Facial measurement differences between patients with schizophrenia and non-psychiatric controls.
Compton, Michael T; Brudno, Jennifer; Kryda, Aimee D; Bollini, Annie M; Walker, Elaine F
2007-07-01
Several previous reports suggest that facial measurements in patients with schizophrenia differ from those of non-psychiatric controls. Because the face and brain develop in concert from the same ectodermal tissue, the study of quantitative craniofacial abnormalities may give clues to genetic and/or environmental factors predisposing to schizophrenia. Using a predominantly African American sample, the present research question was two-fold: (1) Do patients differ from controls in terms of a number of specific facial measurements?, and (2) Does cluster analysis based on these facial measurements reveal distinct facial morphologies that significantly discriminate patients from controls? Facial dimensions were measured in 73 patients with schizophrenia and related psychotic disorders (42 males and 31 females) and 69 non-psychiatric controls (35 males and 34 females) using a 25-cm head and neck caliper. Due to differences in facial dimensions by gender, separate independent samples Student's t-tests and logistic regression analyses were employed to discern differences in facial measures between the patient and control groups in women and men. Findings were further explored using cluster analysis. Given an association between age and some facial dimensions, the effect of age was controlled. In unadjusted bivariate tests, female patients differed from female controls on several facial dimensions, though male patients did not differ significantly from male controls for any facial measure. Controlling for age using logistic regression, female patients had a greater mid-facial depth (tragus-subnasale) compared to female controls; male patients had lesser upper facial (trichion-glabella) and lower facial (subnasale-gnathion) heights compared to male controls. Among females, cluster analysis revealed two facial morphologies that significantly discriminated patients from controls, though this finding was not evident when employing further cluster analyses using secondary distance measures. When the sample was restricted to African Americans, results were similar and consistent. These findings indicate that, in a predominantly African American sample, some facial measurements differ between patients with schizophrenia and non-psychiatric controls, and these differences appear to be gender-specific. Further research on gender-specific quantitative craniofacial measurement differences between cases and controls could suggest gender-specific differences in embryologic/fetal neurodevelopmental processes underpinning schizophrenia.
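A minimal sketch of the kind of age-adjusted, gender-stratified logistic regression described above, using simulated data; the column names (group, sex, age, midfacial_depth) and effect sizes are hypothetical, not the study's.

```python
# Hedged sketch (not the authors' code): age-adjusted logistic regression of
# case/control status on one facial dimension, run separately by gender.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "age": rng.integers(18, 65, n),
    "group": rng.integers(0, 2, n),          # 1 = patient, 0 = control
})
# Hypothetical mid-facial depth (tragus-subnasale) in cm, loosely tied to group and age.
df["midfacial_depth"] = 11.5 + 0.3 * df["group"] + 0.01 * df["age"] + rng.normal(0, 0.4, n)

for sex, sub in df.groupby("sex"):
    # group ~ facial measure + age: the age term controls for the reported
    # association between age and some facial dimensions.
    model = smf.logit("group ~ midfacial_depth + age", data=sub).fit(disp=False)
    print(sex, model.params["midfacial_depth"], model.pvalues["midfacial_depth"])
```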
The faces of pain: a cluster analysis of individual differences in facial activity patterns of pain.
Kunz, M; Lautenbacher, S
2014-07-01
There is general agreement that facial activity during pain conveys pain-specific information but is nevertheless characterized by substantial inter-individual differences. With the present study we aim to investigate whether these differences represent idiosyncratic variations or whether they can be clustered into distinct facial activity patterns. Facial actions during heat pain were assessed in two samples of pain-free individuals (n = 128; n = 112) and were later analysed using the Facial Action Coding System. Hierarchical cluster analyses were used to look for combinations of single facial actions in episodes of pain. The stability/replicability of facial activity patterns was determined across samples as well as across different basic social situations. Cluster analyses revealed four distinct activity patterns during pain, which stably occurred across samples and situations: (I) narrowed eyes with furrowed brows and wrinkled nose; (II) opened mouth with narrowed eyes; (III) raised eyebrows; and (IV) furrowed brows with narrowed eyes. In addition, a considerable number of participants were facially completely unresponsive during pain induction (stoic cluster). These activity patterns seem to be reaction stereotypies in the majority of individuals (in nearly two-thirds), whereas a minority displayed varying clusters across situations. These findings suggest that there is no uniform set of facial actions but instead there are at least four different facial activity patterns occurring during pain that are composed of different configurations of facial actions. Raising awareness about these different 'faces of pain' might hold the potential of improving the detection and, thereby, the communication of pain. © 2013 European Pain Federation - EFIC®
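The hierarchical clustering of binary facial-action profiles could be sketched as follows; this is illustrative only (simulated 0/1 action-unit data, Jaccard distance, average linkage), not the authors' exact pipeline.

```python
# Hedged sketch: hierarchical clustering of binary FACS profiles
# (1 = action unit observed during a pain episode). Data are simulated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(40, 10)).astype(bool)   # 40 participants x 10 action units

dist = pdist(X, metric="jaccard")                     # Jaccard suits presence/absence data
Z = linkage(dist, method="average")
labels = fcluster(Z, t=4, criterion="maxclust")       # cut the dendrogram into 4 clusters
print(labels)
```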
Towards a new taxonomy of idiopathic orofacial pain.
Woda, Alain; Tubert-Jeannin, Stéphanie; Bouhassira, Didier; Attal, Nadine; Fleiter, Bernard; Goulet, Jean-Paul; Gremeau-Richard, Christelle; Navez, Marie Louise; Picard, Pascale; Pionchon, Paul; Albuisson, Eliane
2005-08-01
There is no current consensus on the taxonomy of the different forms of idiopathic orofacial pain (stomatodynia, atypical odontalgia, atypical facial pain, facial arthromyalgia), which are sometimes considered as separate entities and sometimes grouped together. In the present prospective multicentric study, we used a systematic approach to help to place these different painful syndromes in the general classification of chronic facial pain. This multicenter study was carried out on 245 consecutive patients presenting with chronic facial pain (>4 months duration). Each patient was seen by two experts who proposed a diagnosis, administered a 111-item questionnaire and filled out a standardized 68-item examination form. Statistical processing included univariate analysis and several forms of multidimensional analysis. Migraines (n=37), tension-type headache (n=26), post-traumatic neuralgia (n=20) and trigeminal neuralgia (n=13) tended to cluster independently. When signs and symptoms describing topographic features were not included in the list of variables, the idiopathic orofacial pain patients tended to cluster in a single group. Inside this large cluster, only stomatodynia (n=42) emerged as a distinct homogenous subgroup. In contrast, facial arthromyalgia (n=46) and an entity formed with atypical facial pain (n=25) and atypical odontalgia (n=13) could only be individualised by variables reflecting topographical characteristics. These data provide grounds for an evidence-based classification of idiopathic facial pain entities and indicate that the current sub-classification of these syndromes relies primarily on the topography of the symptoms.
Proposed shade guide for human facial skin and lip: a pilot study.
Wee, Alvin G; Beatty, Mark W; Gozalo-Diaz, David J; Kim-Pusateri, Seungyee; Marx, David B
2013-08-01
Currently, no commercially available facial shade guide exists in the United States for the fabrication of facial prostheses. The purpose of this study was to measure facial skin and lip color in a human population sample stratified by age, gender, and race. Clustering analysis was used to determine optimal color coordinates for a proposed facial shade guide. Participants (n=119) were recruited from 4 racial/ethnic groups, 5 age groups, and both genders. Reflectance measurements of participants' noses and lower lips were made by using a spectroradiometer and xenon arc lamp with a 45/0 optical configuration. Repeated measures ANOVA (α=.05) was used to identify skin and lip color differences resulting from race, age, gender, and location, and a hierarchical clustering analysis was used to identify clusters of skin colors. Significant contributors to L*a*b* facial color were race and facial location (P<.01). All factors affected b* (P<.05). Age affected only b* (P<.001), while gender affected only L* (P<.05) and b* (P<.05). Analyses identified 5 clusters of skin color. The study showed that skin color differences caused by age and gender primarily occurred along the yellow-blue axis. A significant lightness difference between gender groups was also found. Clustering analysis identified 5 distinct skin shade tabs. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
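A rough sketch of how CIELAB measurements might be grouped into five shade clusters; the L*a*b* values below are simulated and the linkage method and cluster means are illustrative, not the study's protocol.

```python
# Illustrative sketch only: hierarchical clustering of CIELAB skin-colour
# coordinates into five clusters, one candidate shade tab per cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
lab = np.column_stack([
    rng.normal(60, 8, 119),    # L*  (lightness)
    rng.normal(12, 3, 119),    # a*  (red-green)
    rng.normal(18, 4, 119),    # b*  (yellow-blue)
])

Z = linkage(lab, method="ward")
cluster_id = fcluster(Z, t=5, criterion="maxclust")
# One candidate shade tab per cluster: the cluster's mean colour coordinates.
tabs = np.array([lab[cluster_id == k].mean(axis=0) for k in range(1, 6)])
print(tabs)
```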
Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine
Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang
2014-01-01
Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative, experience-based, and subjective nature, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only study the classification of facial complexion, which we regard as qualitative analysis. The severity or degree of facial complexion, needed for quantitative analysis, has not been reported yet. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion computed from the whole face of patients. The features are built on four chromaticity bases split by the luminance distribution in CIELAB color space. The chromaticity bases are constructed from the dominant facial colors using two-level clustering; the optimal luminance split is determined through experimental comparison. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with optimal model parameters. In addition, the features are further improved by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R
2014-01-01
Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed subjectively, can be easily and reproducibly measured using three-dimensional photogrammetry. The RMSD for facial asymmetry of healthy volunteers clusters at approximately 0.80 ± 0.24 mm. Patients with facial asymmetry due to a pathologic process can be differentiated from normative facial asymmetry based on their RMSDs.
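A simplified sketch of a reflection-based symmetry RMSD, assuming the face is given as a point cloud already aligned so that x = 0 approximates the plane of maximum symmetry (the study instead estimates that plane from the registered surface data).

```python
# Rough sketch: reflect the point cloud, match each point to its nearest
# mirrored neighbour, and report the root mean square deviation in mm.
import numpy as np
from scipy.spatial import cKDTree

def symmetry_rmsd(points: np.ndarray) -> float:
    mirrored = points * np.array([-1.0, 1.0, 1.0])    # reflect about x = 0
    tree = cKDTree(mirrored)
    d, _ = tree.query(points)                         # nearest mirrored point per vertex
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(3)
face = rng.normal(size=(5000, 3)) * [40.0, 60.0, 30.0]   # synthetic "face" in mm
print(symmetry_rmsd(face))
```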
Facial Structure Analysis Separates Autism Spectrum Disorders into Meaningful Clinical Subgroups
ERIC Educational Resources Information Center
Obafemi-Ajayi, Tayo; Miles, Judith H.; Takahashi, T. Nicole; Qi, Wenchuan; Aldridge, Kristina; Zhang, Minqi; Xin, Shi-Qing; He, Ying; Duan, Ye
2015-01-01
Varied cluster analyses were applied to facial surface measurements from 62 prepubertal boys with essential autism to determine whether facial morphology constitutes a viable biomarker for delineation of discrete Autism Spectrum Disorders (ASD) subgroups. An earlier study indicated the utility of facial morphology for autism subgrouping (Aldridge et al. in…
Cephalometric features in isolated growth hormone deficiency.
Oliveira-Neto, Luiz Alves; Melo, Maria de Fátima B; Franco, Alexandre A; Oliveira, Alaíde H A; Souza, Anita H O; Valença, Eugênia H O; Britto, Isabela M P A; Salvatori, Roberto; Aguiar-Oliveira, Manuel H
2011-07-01
To analyze cephalometric features in adults with isolated growth hormone (GH) deficiency (IGHD). Nine adult IGHD individuals (7 males and 2 females; mean age, 37.8 ± 13.8 years) underwent a cross-sectional cephalometric study, including 9 linear and 5 angular measurements. Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were calculated. To pool cephalometric measurements in both genders, results were normalized by standard deviation scores (SDS), using the population means from an atlas of the normal Brazilian population. All linear measurements were reduced in IGHD subjects. Total maxillary length was the most reduced parameter (-6.5 ± 1.7), followed by a cluster of six measurements: posterior cranial base length (-4.9 ± 1.1), total mandibular length (-4.4 ± 0.7), total posterior facial height (-4.4 ± 1.1), total anterior facial height (-4.3 ± 0.9), mandibular corpus length (-4.2 ± 0.8), and anterior cranial base length (-4.1 ± 1.7). Less affected measurements were lower-anterior facial height (-2.7 ± 0.7) and mandibular ramus height (-2.5 ± 1.5). SDS angular measurements were in the normal range, except for increased gonial angle (+2.5 ± 1.1). Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were not different from those of the reference group. Congenital, untreated IGHD causes reduction of all linear measurements of craniofacial growth, particularly total maxillary length. Angular measurements and facial height ratios are less affected, suggesting that IGHD causes proportional blunting of craniofacial growth.
Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak
2012-05-01
A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. In this study, patients with asymmetry were classified into 4 statistically distinct groups according to their anatomic features. This diagnostic classification method will assist in treatment planning for patients with facial asymmetry and may be used to explore the etiology of these variants of facial asymmetry. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Responses of single facial taste fibers in the channel catfish, Ictalurus punctatus, to amino acids.
Kohbara, J; Michel, W; Caprio, J
1992-10-01
1. Amino acids and nucleotides stimulate taste receptors of teleosts. In this report, responses to these compounds of 105 facial taste fibers (79 fully characterized) that innervate maxillary barbel taste buds of the channel catfish (Ictalurus punctatus) were analyzed. 2. The fully characterized facial taste fibers that responded to amino acids (n = 68) were generally poorly responsive to nucleotides and related substances (NRS), whereas the fibers responsive to NRS (n = 11) were poorly responsive to amino acids. Spike discharge of the amino acid-responsive fibers to the most potent amino acid stimulus tested per fiber increased 44-fold from a mean spontaneous activity of 2.1 +/- 3.5 to 92.1 +/- 42.4 (SD) spikes/3 s. Spike activity of the NRS-responsive fibers to NRS increased 11.5-fold from a mean spontaneous activity of 3.4 +/- 5.9 to 39.1 +/- 27.4 spikes/3 s. There was no significant difference between the spontaneous rates, but stimulus evoked spike rates for the amino acid-responsive fibers were significantly greater (P < 0.05; Mann-Whitney test) than those for the NRS-responsive fibers. 3. Hierarchical cluster analysis based on the 3-s response time identified three major groups of neurons. The identified clusters comprised neurons that were highly responsive to either L-alanine (i.e., Ala cluster; n = 39), L-arginine (i.e., Arg cluster; n = 29), or NRS (NRS cluster; n = 11). Fibers comprising the Arg cluster were more narrowly tuned than those within the Ala cluster. This report further characterizes the responses to amino acids of the individual facial taste fibers comprising the Ala and Arg clusters. 4. Subclusters were evident within both of the amino acid-responsive clusters. The Arg cluster was divisible into two subclusters dependent on the response to 1 mM L-proline. Twelve neurons that were significantly (P < 0.05; Mann-Whitney test) more responsive to L-proline than the remaining 17 neurons within the Arg cluster formed the Arg/Pro subcluster; these latter 17 neurons comprised the Arg subcluster. However, there was no significant difference (Mann-Whitney test) in the response to L-arginine between fibers within either subcluster across four different response times analyzed. Fibers within the Ala cluster were generally poorly responsive to L-proline. Four alanine subclusters were suggested on the basis of their relative responses to L-alanine, D-alanine, L-arginine, and the NRS; however, of the 39 fibers comprising the alanine cluster, two alanine subclusters comprised only two fibers each, and the third subcluster consisted of four fibers.(ABSTRACT TRUNCATED AT 400 WORDS)
Comparative histological study of the mammalian facial nucleus.
Furutani, Rui; Sugita, Shoei
2008-04-01
We performed comparative Nissl, Klüver-Barrera and Golgi staining studies of the mammalian facial nucleus to classify the morphologically distinct subdivisions and the neuronal types in the rat, rabbit, ferret, Japanese monkey (Macaca fuscata), pig, horse, Risso's dolphin (Grampus griseus), and bottlenose dolphin (Tursiops truncatus). The medial subnucleus was observed in all examined species; however, that of the Risso's and bottlenose dolphins was a poorly-developed structure comprised of scattered neurons. The medial subnuclei of terrestrial mammals were well-developed cytoarchitectonic structures, usually a rounded column comprised of densely clustered neurons. Intermediate and lateral subnuclei were found in all studied mammals, with differences in columnar shape and neuronal types from species to species. The dorsolateral subnucleus was detected in all mammals but the Japanese monkey, whose facial neurons converged into the intermediate subnucleus. The dorsolateral subnuclei of the two dolphin species studied were expanded subdivisions comprised of densely clustered cells. The ventromedial subnuclei of the ferret, pig, and horse were richly-developed columns comprised of large multipolar neurons. Pig and horse facial nuclei contained another ventral cluster, the ventrolateral subnucleus. The facial nuclei of the Japanese monkey and the bottlenose dolphin were similar in their ventral subnuclear organization. Our findings show species-specific subnuclear organization and distribution patterns of distinct types of neurons within morphologically discrete subdivisions, reflecting functional differences.
Design of aerosol face masks for children using computerized 3D face analysis.
Amirav, Israel; Luder, Anthony S; Halamish, Asaf; Raviv, Dan; Kimmel, Ron; Waisman, Dan; Newhouse, Michael T
2014-08-01
Aerosol masks were originally developed for adults and downsized for children. Overall fit to minimize dead space and a tight seal are problematic, because children's faces undergo rapid and marked topographic and internal anthropometric changes in their first few months/years of life. Facial three-dimensional (3D) anthropometric data were used to design an optimized pediatric mask. Children's faces (n=271, aged 1 month to 4 years) were scanned with 3D technology. Data for the distance from the bridge of the nose to the tip of the chin (H) and the width of the mouth opening (W) were used to categorize the scans into "small," "medium," and "large" "clusters." "Average" masks were developed from each cluster to provide an optimal seal with minimal dead space. The resulting computerized contour, W and H, were used to develop the SootherMask® that enables children, "suckling" on their own pacifier, to keep the mask on their face, mainly by means of subatmospheric pressure. The relatively wide and flexible rim of the mask accommodates variations in facial size within and between clusters. Unique pediatric face masks were developed based on anthropometric data obtained through computerized 3D face analysis. These masks follow facial contours and gently seal to the child's face, and thus may minimize aerosol leakage and dead space.
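The small/medium/large grouping by face height (H) and mouth width (W) could be approximated with a simple k-means step; the values below are simulated millimetre measurements, not the scanned data.

```python
# Sketch under assumptions: cluster children's faces into three mask sizes
# using nose-bridge-to-chin height (H) and mouth width (W).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
H = rng.normal(75, 10, 271)          # simulated heights (mm)
W = rng.normal(35, 5, 271)           # simulated mouth widths (mm)
X = np.column_stack([H, W])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Order clusters by mean height so the labels read small/medium/large.
order = np.argsort(km.cluster_centers_[:, 0])
print(km.cluster_centers_[order])
```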
FARRI, A.; ENRICO, A.; FARRI, F.
2012-01-01
SUMMARY In 1988, diagnostic criteria for headaches were drawn up by the International Headache Society (IHS), dividing them into headaches, cranial neuralgias and facial pain. The 2nd edition of the International Classification of Headache Disorders (ICHD) was produced in 2004, and still provides a dynamic and useful instrument for clinical practice. We have examined the current ICHD, which comprises 14 groups. The first four cover primary headaches, with "benign paroxysmal vertigo of childhood" being the form of migraine of interest to otolaryngologists; groups 5 to 12 classify "secondary headaches"; group 11 is formed of "headache or facial pain attributed to disorder of cranium, neck, eyes, ears, nose, sinuses, teeth, mouth or other facial or cranial structures"; group 13, consisting of "cranial neuralgias and central causes of facial pain", is also of relevance to otolaryngology. Neither the current classification system nor the original one has a satisfactory place for migraine-associated vertigo. Another critical point of the classification concerns cranio-facial pain syndromes such as Sluder's neuralgia, previously included in the 1988 classification among cluster headaches, and now included in the section on "cranial neuralgias and central causes of facial pain", even though Sluder's neuralgia has not been adequately validated. As we have highlighted in our studies, there are considerable similarities between Sluder's syndrome and cluster headaches. The main features distinguishing the two are the trend to cluster over time, found only in cluster headaches, and the distribution of pain, with greater nasal manifestations in the case of Sluder's syndrome. We believe that it is better and clearer, particularly on the basis of our clinical experience and published studies, to include this nosological entity, which is clearly distinct from an otolaryngological point of view, as a variant of cluster headache. We agree with experts in the field of headaches, such as Olesen and Nappi who contributed to previous classifications, on the need for a revised classification, particularly with regard to secondary headaches. According to the current Committee on headaches, the updated version of the classification, presently under study, is due to be published soon; it is our hope that this revised version will take into account some of the above considerations. PMID:22767967
Dissociable roles of internal feelings and face recognition ability in facial expression decoding.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
2016-05-15
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.
He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej
2011-12-01
Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms that directly use level 3 Basic Linear Algebra Subprograms (BLAS) are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
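A minimal NumPy sketch of symmetric NMF by multiplicative updates for minimising ||A - HH^T||_F^2 with H >= 0; the damped rule with beta = 0.5 is a commonly used variant and is not claimed to be the paper's exact α-/β-SNMF algorithm or its parallel BLAS implementation.

```python
# Hedged sketch of symmetric NMF via a damped multiplicative update.
import numpy as np

def snmf(A: np.ndarray, rank: int, n_iter: int = 500, beta: float = 0.5,
         eps: float = 1e-12, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], rank))
    for _ in range(n_iter):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        # Damped multiplicative update; beta = 0.5 is one common choice.
        H *= (1.0 - beta) + beta * AH / np.maximum(HHtH, eps)
    return H

# Toy similarity matrix with two blocks; rows cluster by their largest H column.
A = np.block([[np.full((5, 5), 0.9), np.full((5, 5), 0.1)],
              [np.full((5, 5), 0.1), np.full((5, 5), 0.9)]])
H = snmf(A, rank=2)
print(H.argmax(axis=1))
```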
Facial correlates of emotional behaviour in the domestic cat (Felis catus).
Bennett, Valerie; Gourkow, Nadine; Mills, Daniel S
2017-08-01
Leyhausen's (1979) work on cat behaviour and facial expressions associated with offensive and defensive behaviour is widely embraced as the standard for interpretation of agonistic behaviour in this species. However, it is a largely anecdotal description that can be easily misunderstood. Recently, a facial action coding system has been developed for cats (CatFACS), similar to that used for objectively coding human facial expressions. This study reports on the use of this system to describe the relationship between behaviour and facial expressions of cats in confinement contexts without and with human interaction, in order to generate hypotheses about the relationship between these expressions and underlying emotional state. Video recordings taken of 29 cats resident in a Canadian animal shelter were analysed using 1-0 sampling of 275 4-s video clips. Observations under the two conditions were analysed descriptively using hierarchical cluster analysis for binomial data and indicated that in both situations, about half of the data clustered into three groups. An argument is presented that these largely reflect states based on varying degrees of relaxed engagement, fear and frustration. Facial actions associated with fear included blinking and half-blinking and a left head and gaze bias at lower intensities. Facial actions consistently associated with frustration included hissing, nose-licking, dropping of the jaw, the raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue. Relaxed engagement appeared to be associated with a right gaze and head turn bias. The results also indicate potential qualitative changes associated with differences in intensity in emotional expression following human intervention. The results were also compared to the classic description of "offensive and defensive moods" in cats (Leyhausen, 1979) and previous work by Gourkow et al. (2014a) on behavioural styles in cats in order to assess if these observations had replicable features noted by others. This revealed evidence of convergent validity between the methods. However, the use of CatFACS revealed elements relating to vocalisation and response lateralisation, not previously reported in this literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Head-and-face anthropometric survey of Chinese workers.
Du, Lili; Zhuang, Ziqing; Guan, Hongyu; Xing, Jingcai; Tang, Xianzhi; Wang, Limin; Wang, Zhenglun; Wang, Haijiao; Liu, Yuewei; Su, Wenjin; Benson, Stacey; Gallagher, Sean; Viscusi, Dennis; Chen, Weihong
2008-11-01
Millions of workers in China rely on respirators and other personal protective equipment to reduce the risk of injury and occupational diseases. However, it has been >25 years since the first survey of facial dimensions for Chinese adults was published, and it has never been completely updated. Thus, an anthropometric survey of Chinese civilian workers was conducted in 2006. A total of 3000 subjects (2026 males and 974 females) between 18 and 66 years of age were measured using traditional techniques. Nineteen facial dimensions, height, weight, neck circumference, waist circumference and hip circumference were measured. A stratified sampling plan of three age strata and two gender strata was implemented. Linear regression analysis was used to evaluate the possible effects of gender, age, occupation and body size on facial dimensions. The regression coefficients for gender indicated that for all anthropometric dimensions, males had significantly larger measurements than females. As body mass index increased, dimensions measured increased significantly. Construction workers and miners had significantly smaller measurements than individuals employed in healthcare or manufacturing for a majority of dimensions. Five representative indexes of facial dimension (face length, face width, nose protrusion, bigonial breadth and nasal root breadth) were selected based on correlation and cluster analysis of all dimensions. Through comparison with the facial dimensions of American subjects, this study indicated that Chinese civilian workers have shorter face length, smaller nose protrusion, larger face width and longer lip length.
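One hedged way to mimic the correlation-and-cluster selection of representative dimensions: cluster the variables on 1 - |correlation| and keep one dimension per cluster. The data below are simulated and the choice of five clusters is only for illustration.

```python
# Illustrative sketch: pick "representative" facial dimensions by clustering
# the variables (columns) on a correlation-based distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 19))                 # 300 subjects x 19 dimensions (simulated)

corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)                      # highly correlated variables are "close"
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=5, criterion="maxclust")
# Keep the first variable in each group as its representative index.
reps = [int(np.where(groups == g)[0][0]) for g in range(1, 6)]
print(reps)
```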
Appleton, Katherine M; McGrath, Alanna J; McKinley, Michelle C; Draffin, Claire R; Hamill, Lesley L; Young, Ian S; Woodside, Jayne V
2018-03-01
An effect of increased fruit and vegetable (FV) consumption on facial attractiveness has been proposed and recommended as a strategy to promote FV intakes, but no studies to date demonstrate a causal link between FV consumption and perceived attractiveness. This study investigated perceptions of attractiveness before and after the supervised consumption of 2, 5 or 8 FV portions/day for 4 weeks in 30 low FV consumers. Potential mechanisms for change via skin colour and perceived skin healthiness were also investigated. Faces were photographed at the start and end of the 4 week intervention in controlled conditions. Seventy-three independent individuals subsequently rated all 60 photographs in a randomized order, for facial attractiveness, facial skin yellowness, redness, healthiness, clarity, and symmetry. Using clustered multiple regression, FV consumption over the previous 4 weeks had no direct effect on attractiveness, but, for female faces, some evidence was found for an indirect impact, via linear and non-linear changes in skin yellowness. Effect sizes, however, were small. No association between FV consumption and skin healthiness was found, but skin healthiness was associated with facial attractiveness. Controlled and objectively measured increases in FV consumption for 4 weeks resulted indirectly in increased attractiveness in females via increases in skin yellowness, but effects are small and gradually taper as FV consumption increases. Based on the effect sizes from this study, we are hesitant to recommend the use of facial attractiveness to encourage increased FV consumption. Clinical trial Registration Number NCT01591057 ( www.clinicaltrials.gov ). Registered: 27th April, 2012.
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces
Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed
2017-01-01
The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification, and argued for socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age remain under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was applied as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in more depth using data-driven analysis (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for "joy" and "disgust," and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, consisting of focusing only on the lower part of the face throughout the whole stimulus display time. This consistency may constitute a robust and distinctive "social signature" of emotional identification in aging. Younger adults, however, were more dispersed in terms of gaze behavior and used a more exploratory-gaze strategy, consisting of repeatedly visiting both facial areas. PMID:28450841
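A sketch of the data-driven gaze analysis (dimension reduction followed by clustering), assuming simulated per-trial gaze features; the feature set and number of clusters are illustrative, not those of the study.

```python
# Hedged sketch: standardise per-trial gaze features, reduce with PCA,
# then cluster trials with k-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# e.g. per-trial features: fixation time on upper/lower face, fixation counts, latencies ...
X = rng.normal(size=(440, 12))

Xz = StandardScaler().fit_transform(X)
Xp = PCA(n_components=3).fit_transform(Xz)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xp)
print(np.bincount(labels))   # trials per gaze-behaviour cluster
```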
A new paradigm of oral cancer detection using digital infrared thermal imaging
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.
2016-03-01
Histopathology is considered the gold standard for oral cancer detection, but a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when the test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of a fast, non-invasive and non-ionizing modality for oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activity in carcinogenic facial regions, the heat signatures of patients differ from those of normal subjects. The proposed work uses asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject (front, left and right) are acquired using a long-wave infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study asymmetry of facial temperature distribution between (a) left and right profile faces and (b) the left and right halves of the frontal face. Comparison of temperature distributions suggests that patients manifest greater asymmetry than normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile faces are 93.4% and 95%. Combining features of the frontal and profile faces, average accuracies increase to 96.2% and 97.6%, respectively, for the k-means and fuzzy k-means frameworks.
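A hedged sketch of the unsupervised-then-vote classification scheme: k-means clustering of asymmetry features followed by majority-vote assignment of a class to each cluster. The features and class structure are simulated; the fuzzy k-means variant is omitted here.

```python
# Sketch only: cluster thermal-asymmetry features, then label each cluster
# with the majority class of the labelled subjects that fall into it.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.3, 0.1, (50, 4)),    # patients: larger simulated asymmetry
               rng.normal(0.1, 0.1, (50, 4))])   # controls
y = np.array([1] * 50 + [0] * 50)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_to_class = {c: np.bincount(y[km.labels_ == c]).argmax()
                    for c in range(km.n_clusters)}
pred = np.array([cluster_to_class[c] for c in km.labels_])
print("accuracy:", (pred == y).mean())
```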
Association study of Demodex bacteria and facial dermatoses based on DGGE technique.
Zhao, YaE; Yang, Fan; Wang, RuiLing; Niu, DongLing; Mu, Xin; Yang, Rui; Hu, Li
2017-03-01
The role of bacteria is unclear in the facial skin lesions caused by Demodex. To shed some light on this issue, we conducted a case-control study comparing cases with facial dermatoses with controls with healthy skin using denaturing gradient gel electrophoresis (DGGE) technique. The bacterial diversity, composition, and principal component were analyzed for Demodex bacteria and the matched facial skin bacteria. The result of mite examination showed that all 33 cases were infected with Demodex folliculorum (D. f), whereas 16 out of the 30 controls were infected with D. f, and the remaining 14 controls were infected with Demodex brevis (D. b). The diversity analysis showed that only evenness index presented statistical difference between mite bacteria and matched skin bacteria in the cases. The composition analysis showed that the DGGE bands of cases and controls were assigned to 12 taxa of 4 phyla, including Proteobacteria (39.37-52.78%), Firmicutes (2.7-26.77%), Actinobacteria (0-5.71%), and Bacteroidetes (0-2.08%). In cases, the proportion of Staphylococcus in Firmicutes was significantly higher than that in D. f controls and D. b controls, while the proportion of Sphingomonas in Proteobacteria was significantly lower than that in D. f controls. The between-group analysis (BGA) showed that all the banding patterns clustered into three groups, namely, D. f cases, D. f controls, and D. b controls. Our study suggests that the bacteria in Demodex should come from the matched facial skin bacteria. Proteobacteria and Firmicutes are the two main taxa. The increase of Staphylococcus and decrease of Sphingomonas might be associated with the development of facial dermatoses.
3D digital headform models of Australian cyclists.
Ellena, Thierry; Skals, Sebastian; Subic, Aleksandar; Mustafa, Helmy; Pang, Toh Yen
2017-03-01
Traditional 1D anthropometric data have been the primary source of information used by ergonomists for the dimensioning of head and facial gear. Although these data are simple to use and understand, they only provide univariate measures of key dimensions. 3D anthropometric data, however, describe the complete shape characteristics of the head surface, but are complicated to interpret due to the abundance of information they contain. Consequently, current headform standards based on 1D measurements may not adequately represent the actual head shape variations of the intended user groups. The purpose of this study was to introduce a set of new digital headform models representative of the adult cyclists' community in Australia. Four models were generated based on an Australian 3D anthropometric database of head shapes and a modified hierarchical clustering algorithm. Considerable shape differences were identified between our models and the current headforms from the Australian standard. We conclude that the design of head and facial gear based on current standards might not be favorable for optimal fitting results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Interaction of multiple biomimetic antimicrobial polymers with model bacterial membranes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baul, Upayan, E-mail: upayanb@imsc.res.in; Vemparala, Satyavani, E-mail: vani@imsc.res.in; Kuroda, Kenichi, E-mail: kkuroda@umich.edu
Using atomistic molecular dynamics simulations, the interaction of multiple synthetic random copolymers based on methacrylates with prototypical bacterial membranes is investigated. The simulations show that the cationic polymers form a micellar aggregate in the water phase and the aggregate, when interacting with the bacterial membrane, induces clustering of oppositely charged anionic lipid molecules and enhances ordering of lipid chains. The model bacterial membrane, consequently, develops lateral inhomogeneity in the membrane thickness profile compared to the polymer-free system. The individual polymers in the aggregate are released into the bacterial membrane in a phased manner, and the simulations suggest that the most probable location of the partitioned polymers is near the 1-palmitoyl-2-oleoyl-phosphatidylglycerol (POPG) clusters. The partitioned polymers preferentially adopt facially amphiphilic conformations at the lipid-water interface, despite lacking intrinsic secondary structures such as α-helix or β-sheet found in naturally occurring antimicrobial peptides.
Kim, Do-Won; Kim, Han-Sung; Lee, Seung-Hwan; Im, Chang-Hwan
2013-12-01
Schizophrenia is one of the most devastating of all mental illnesses, and has dimensional characteristics that include both positive and negative symptoms. One problem reported in schizophrenia patients is that they tend to show deficits in face emotion processing, on which negative symptoms are thought to have stronger influence. In this study, four event-related potential (ERP) components (P100, N170, N250, and P300) and their source activities were analyzed using EEG data acquired from 23 schizophrenia patients while they were presented with facial emotion picture stimuli. Correlations between positive and negative syndrome scale (PANSS) scores and source activations during facial emotion processing were calculated to identify the brain areas affected by symptom scores. Our analysis demonstrates that PANSS positive scores are negatively correlated with major areas of the left temporal lobule for early ERP components (P100, N170) and with the right middle frontal lobule for a later component (N250), which indicates that positive symptoms affect both early face processing and facial emotion processing. On the other hand, PANSS negative scores are negatively correlated with several clustered regions, including the left fusiform gyrus (at P100), most of which are not overlapped with regions showing correlations with PANSS positive scores. Our results suggest that positive and negative symptoms affect independent brain regions during facial emotion processing, which may help to explain the heterogeneous characteristics of schizophrenia. © 2013 Elsevier B.V. All rights reserved.
Familial covariation of facial emotion recognition and IQ in schizophrenia.
Andric, Sanja; Maric, Nadja P; Mihaljevic, Marina; Mirjanic, Tijana; van Os, Jim
2016-12-30
Alterations in general intellectual ability and social cognition in schizophrenia are core features of the disorder, evident at the illness's onset and persistent throughout its course. However, previous studies examining cognitive alterations in siblings discordant for schizophrenia yielded inconsistent results. The present study aimed to investigate the nature of the association between facial emotion recognition and general IQ by applying a genetically sensitive cross-trait cross-sibling design. Participants (total n=158; patients, unaffected siblings, controls) were assessed using the Benton Facial Recognition Test, the Degraded Facial Affect Recognition Task (DFAR) and the Wechsler Adult Intelligence Scale-III. Patients had lower IQ and altered facial emotion recognition in comparison to other groups. Healthy siblings and controls did not significantly differ in IQ and DFAR performance, but siblings exhibited intermediate angry facial expression recognition. Cross-trait within-subject analyses showed significant associations between overall DFAR performance and IQ in all participants. Within-trait cross-sibling analyses found significant associations between patients' and siblings' IQ and overall DFAR performance, suggesting their familial clustering. Finally, cross-trait cross-sibling analyses revealed familial covariation of facial emotion recognition and IQ in siblings discordant for schizophrenia, further indicating their familial etiology. Both traits are important phenotypes for genetic studies and potential early clinical markers of schizophrenia-spectrum disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Genetics Home Reference: beta-mannosidosis
... They may also exhibit distinctive facial features and clusters of enlarged blood vessels forming small, dark red ... JM, Zulaica A, Coll MJ, Chabás A. Molecular analysis in two beta-mannosidosis patients: description of a ...
Alternative face models for 3D face registration
NASA Astrophysics Data System (ADS)
Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale
2007-01-01
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade-off computation time with accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
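A bare-bones point-to-point ICP sketch for registering a scanned face to an average face model (AFM); real systems add coarse initialisation (e.g., from landmarks), outlier rejection, and possibly TPS warping, none of which are shown here.

```python
# Hedged sketch: rigid ICP alignment of a face scan to an AFM point cloud.
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, n_iter: int = 30):
    """Iterate nearest-neighbour correspondences and SVD-based rigid alignment."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)              # closest AFM point for every vertex
        corr = target[idx]
        mu_s, mu_c = src.mean(axis=0), corr.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (corr - mu_c))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_c - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total

rng = np.random.default_rng(8)
afm = rng.normal(size=(2000, 3))              # stand-in average face model points
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan = afm @ Rz.T + np.array([0.3, 0.2, 0.1]) # a misaligned "scan" of the same face
aligned, R, t = icp(scan, afm)
print(np.mean(np.linalg.norm(aligned - afm, axis=1)))   # mean residual after ICP
```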
DeUgarte, Catherine Marin; Woods, K S; Bartolucci, Alfred A; Azziz, Ricardo
2006-04-01
Hirsutism (i.e. facial and body terminal hair growth in a male-like pattern in women) is the principal clinical sign of hyperandrogenism, although its definition remains unclear. The purposes of the present study were to define 1) the degree of facial and body terminal hair, as assessed by the modified Ferriman-Gallwey (mFG) score, in unselected women from the general population; 2) the effect of race (Black and White) on the same; and 3) the normative cutoff values. We conducted a prospective observational study at a tertiary academic medical center. Participants included 633 unselected White (n = 283) and Black (n = 350) women presenting for a preemployment physical exam. Interventions included history and physical examination. Terminal body hair growth was assessed using the mFG scoring system; nine body areas were scored from 0-4 for terminal hair growth distribution. The mFG scores were not normally distributed; although cluster analysis failed to identify a natural cutoff value or clustering of the population, principal component and univariate analyses denoted two nearly distinct clusters that occurred above and below an mFG value of 2, with the bulk of the scores below. Overall, an mFG score of at least 3 was observed in 22.1% of all subjects (i.e. the upper quartile); of these subjects, 69.3% complained of being hirsute, compared with 15.8% of women with an mFG score below this value, and similar to the proportion of women with an mFG score of at least 8 who considered themselves to be hirsute (70.0%). Overall, there were no significant differences between Black and White women. Our data indicate that the prevalence and degree of facial and body terminal hair growth, as assessed by the mFG score, is similar in Black and White women and that an mFG of at least 3 signals the population of women whose hair growth falls out of the norm.
Facial nerve palsy after reactivation of herpes simplex virus type 1 in diabetic mice.
Esaki, Shinichi; Yamano, Koji; Katsumi, Sachiyo; Minakata, Toshiya; Murakami, Shingo
2015-04-01
Bell's palsy is highly associated with diabetes mellitus (DM). Either the reactivation of herpes simplex virus type 1 (HSV-1) or diabetic mononeuropathy has been proposed to cause the facial paralysis observed in DM patients. However, distinguishing whether the facial palsy is caused by herpetic neuritis or diabetic mononeuropathy is difficult. We previously reported that facial paralysis was aggravated in DM mice after HSV-1 inoculation of the murine auricle. In the current study, we induced HSV-1 reactivation by an auricular scratch following DM induction with streptozotocin (STZ). Controlled animal study. Diabetes mellitus was induced with streptozotocin injection only in mice that had developed transient facial nerve paralysis with HSV-1. Recurrent facial palsy was induced after HSV-1 reactivation by auricular scratch. After DM induction, the number of cluster of differentiation 3 (CD3)(+) T cells decreased by 70% in the DM mice, and facial nerve palsy recurred in 13% of the DM mice. Herpes simplex virus type 1 deoxyribonucleic acid (DNA) was detected in the facial nerve of all of the DM mice with palsy, and HSV-1 capsids were found in the geniculate ganglion using electron microscopy. Herpes simplex virus type 1 DNA was also found in some of the DM mice without palsy, which suggested the subclinical reactivation of HSV-1. These results suggested that HSV-1 reactivation in the geniculate ganglion may be the main causative factor of the increased incidence of facial paralysis in DM patients. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Evers, Stefan; Rapoport, Alan
2017-04-01
Background Oxygen is recommended for the treatment of acute cluster headache attacks. However, it is not available worldwide. Methods The International Headache Society performed a survey among its national member societies on the availability of and the restrictions on oxygen in the treatment of cluster headache. Results Oxygen is reimbursed in 50% of all countries responding (n = 22). There are additional restrictions on the reimbursement of the facial mask and with respect to age. Conclusion Oxygen for the treatment of cluster headache attacks is not reimbursed worldwide. Headache societies should pressure national/public health authorities to reimburse oxygen for cluster headache in all countries.
Adaptive metric learning with deep neural networks for video-based facial expression recognition
NASA Astrophysics Data System (ADS)
Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping
2018-01-01
Video-based facial expression recognition has become increasingly important for many real-world applications. Although numerous efforts have been made for single sequences, balancing the complex distribution of intra- and interclass variations between sequences remains a major difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using the similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables the metric calculations for large data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-branch fully connected layer structure. Our proposed method has been evaluated with three well-known databases, and the experimental results show that our method outperforms many state-of-the-art approaches.
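The (N+M)-tuplet clusters loss itself is not spelled out in the abstract; as a loose sketch of the underlying idea (comparing an anchor against one positive and several negatives in a single term), a generic N-pair-style metric loss can be written as follows. The function name and exact formulation are illustrative assumptions, not the authors' loss.

```python
import numpy as np

def npair_style_loss(anchor, positive, negatives):
    """Simplified multi-negative metric loss (illustrative only).

    anchor, positive: 1-D feature vectors of the same identity/expression.
    negatives: 2-D array with one row per different-class feature.
    Loss = log(1 + sum_j exp(a.n_j - a.p)); it is small when the anchor is
    closer (by dot product) to the positive than to every negative.
    """
    pos_sim = anchor @ positive
    neg_sims = negatives @ anchor
    return np.log1p(np.sum(np.exp(neg_sims - pos_sim)))

# Toy usage with random unit-norm features.
rng = np.random.default_rng(1)
a, p = rng.normal(size=64), rng.normal(size=64)
negs = rng.normal(size=(5, 64))
print(npair_style_loss(a / np.linalg.norm(a), p / np.linalg.norm(p),
                       negs / np.linalg.norm(negs, axis=1, keepdims=True)))
```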
Cluster headache syndrome. Ways to abort or ward off attacks.
Marks, D R; Rapoport, A M
1992-02-15
Cluster headache is a syndrome of severe head and facial pain accompanied by autonomic abnormalities. Men are affected more frequently than women. Headaches occur daily during periods of susceptibility, which may be followed by periods of remission. The etiology of cluster headache is uncertain. Recent work suggests that hypothalamic dysfunction and/or oxyhemoglobin desaturation may be involved in its pathogenesis. Effective medical regimens are available for aborting acute attacks and for preventing attacks. Surgical ablation of the trigeminal ganglion has been effective in some patients when conventional medical therapy has failed.
Facial animation on an anatomy-based hierarchical face model
NASA Astrophysics Data System (ADS)
Zhang, Yu; Prakash, Edmond C.; Sung, Eric
2003-04-01
In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like the real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
NASA Astrophysics Data System (ADS)
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expressions driven by emotional state, even though facial skin colour, which is closely related to human emotion, is needed to enhance facial expressions. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions closely resemble genuine human expressions and enhance the facial expression of the virtual human.
Pimenta e Silva Machado, Luciana; de Macedo Nery, Marianita Batista; de Góis Nery, Cláudio; Leles, Cláudio Rodrigues
2012-08-02
Temporomandibular disorder (TMD) patients might present a number of concurrent clinical diagnoses that may be clustered according to their similarity. Profiling patients' clinical presentations can be useful for better understanding the behavior of TMD and for providing appropriate treatment planning. The aim of this study was to simultaneously classify symptomatic patients diagnosed with a variety of subtypes of TMD into homogeneous groups based on their clinical presentation and occurrence of comorbidities. Clinical records of 357 consecutive TMD patients seeking treatment in a private specialized clinic were included in the study sample. Patients presenting multiple subtypes of TMD diagnosed simultaneously were categorized according to the AAOP criteria. Descriptive statistics and two-step cluster analysis were used to characterize the clinical presentation of these patients based on the primary and secondary clinical diagnoses. The most common diagnoses were localized masticatory muscle pain (n = 125) and disc displacement without reduction (n = 104). Comorbidity was identified in 288 patients. The automatic selection of an optimal number of clusters included 100% of cases, generating an initial 6-cluster solution and a final 4-cluster solution. The interpretation of within-group ranking of the importance of variables in the clustering solutions resulted in the following characterization of clusters: chronic facial pain (n = 36), acute muscle pain (n = 125), acute articular pain (n = 75) and chronic articular impairment (n = 121). Subgroups of acute and chronic TMD patients seeking treatment can be identified using clustering methods to provide a better understanding of the clinical presentation of TMD when multiple diagnoses are present. Classifying patients into identifiable symptomatic profiles would help clinicians to estimate how common a disorder is within a population of TMD patients and understand the probability of certain patterns of clinical complaints.
Ruocco, Anthony C.; Reilly, James L.; Rubin, Leah H.; Daros, Alex R.; Gershon, Elliot S.; Tamminga, Carol A.; Pearlson, Godfrey D.; Hill, S. Kristian; Keshavan, Matcheri S.; Gur, Ruben C.; Sweeney, John A.
2014-01-01
Background: Difficulty recognizing facial emotions is an important social-cognitive deficit associated with psychotic disorders. It also may reflect a familial risk for psychosis in schizophrenia-spectrum disorders and bipolar disorder. Objective: The objectives of this study from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium were to: 1) compare emotion recognition deficits in schizophrenia, schizoaffective disorder and bipolar disorder with psychosis, 2) determine the familiality of emotion recognition deficits across these disorders, and 3) evaluate emotion recognition deficits in nonpsychotic relatives with and without elevated Cluster A and Cluster B personality disorder traits. Method: Participants included probands with schizophrenia (n=297), schizoaffective disorder (depressed type, n=61; bipolar type, n=69), bipolar disorder with psychosis (n=248), their first-degree relatives (n=332, n=69, n=154, and n=286, respectively) and healthy controls (n=380). All participants completed the Penn Emotion Recognition Test, a standardized measure of facial emotion recognition assessing four basic emotions (happiness, sadness, anger and fear) and neutral expressions (no emotion). Results: Compared to controls, emotion recognition deficits among probands increased progressively from bipolar disorder to schizoaffective disorder to schizophrenia. Proband and relative groups showed similar deficits perceiving angry and neutral faces, whereas deficits on fearful, happy and sad faces were primarily isolated to schizophrenia probands. Even non-psychotic relatives without elevated Cluster A or Cluster B personality disorder traits showed deficits on neutral and angry faces. Emotion recognition ability was moderately familial only in schizophrenia families. Conclusions: Emotion recognition deficits are prominent but somewhat different across psychotic disorders. These deficits are reflected to a lesser extent in relatives, particularly on angry and neutral faces. Deficits were evident in non-psychotic relatives even without elevated personality disorder traits. Deficits in facial emotion recognition may reflect an important social-cognitive deficit in patients with psychotic disorders. PMID:25052782
Ruocco, Anthony C; Reilly, James L; Rubin, Leah H; Daros, Alex R; Gershon, Elliot S; Tamminga, Carol A; Pearlson, Godfrey D; Hill, S Kristian; Keshavan, Matcheri S; Gur, Ruben C; Sweeney, John A
2014-09-01
Difficulty recognizing facial emotions is an important social-cognitive deficit associated with psychotic disorders. It also may reflect a familial risk for psychosis in schizophrenia-spectrum disorders and bipolar disorder. The objectives of this study from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium were to: 1) compare emotion recognition deficits in schizophrenia, schizoaffective disorder and bipolar disorder with psychosis, 2) determine the familiality of emotion recognition deficits across these disorders, and 3) evaluate emotion recognition deficits in nonpsychotic relatives with and without elevated Cluster A and Cluster B personality disorder traits. Participants included probands with schizophrenia (n=297), schizoaffective disorder (depressed type, n=61; bipolar type, n=69), bipolar disorder with psychosis (n=248), their first-degree relatives (n=332, n=69, n=154, and n=286, respectively) and healthy controls (n=380). All participants completed the Penn Emotion Recognition Test, a standardized measure of facial emotion recognition assessing four basic emotions (happiness, sadness, anger and fear) and neutral expressions (no emotion). Compared to controls, emotion recognition deficits among probands increased progressively from bipolar disorder to schizoaffective disorder to schizophrenia. Proband and relative groups showed similar deficits perceiving angry and neutral faces, whereas deficits on fearful, happy and sad faces were primarily isolated to schizophrenia probands. Even non-psychotic relatives without elevated Cluster A or Cluster B personality disorder traits showed deficits on neutral and angry faces. Emotion recognition ability was moderately familial only in schizophrenia families. Emotion recognition deficits are prominent but somewhat different across psychotic disorders. These deficits are reflected to a lesser extent in relatives, particularly on angry and neutral faces. Deficits were evident in non-psychotic relatives even without elevated personality disorder traits. Deficits in facial emotion recognition may reflect an important social-cognitive deficit in patients with psychotic disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2014-06-01
The study in this paper is part of a broader research effort on discovering facial sub-clusters in different ethnicity face databases. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database where each vector component represents the likelihood that a given face belongs to each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage in this system involves a clustering method which evaluates and compares the clustering results of five different clustering algorithms (average, complete, single hierarchical algorithm, k-means and DIGNET), and selects the best strategy for each data collection. In this paper we present the comparative performance of clustering results of DIGNET and four clustering algorithms (average, complete, single hierarchical and k-means) on fabricated 2D and 3D samples, and on actual face images from various databases, using four different standard metrics. These metrics are the silhouette figure, the mean silhouette coefficient, the Hubert test Γ coefficient, and the classification accuracy for each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metric values fall below the acceptance threshold but are not too low (values that are too low correspond to ambiguous or false results), the clustering results need to be verified by the other algorithms.
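A minimal sketch of this kind of algorithm comparison, using scikit-learn's hierarchical (average, complete, single linkage) and k-means implementations scored with the mean silhouette coefficient on fabricated 2-D samples; DIGNET is not publicly packaged, so it is omitted here.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Fabricated 2-D samples with three underlying clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

algorithms = {
    "average": AgglomerativeClustering(n_clusters=3, linkage="average"),
    "complete": AgglomerativeClustering(n_clusters=3, linkage="complete"),
    "single": AgglomerativeClustering(n_clusters=3, linkage="single"),
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0),
}

# Mean silhouette coefficient: higher means tighter, better-separated clusters.
for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    print(f"{name:9s} silhouette = {silhouette_score(X, labels):.3f}")
```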
A unified probabilistic framework for spontaneous facial action modeling and understanding.
Tong, Yan; Chen, Jixu; Ji, Qiang
2010-02-01
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
[Trigeminal autonomic cephalgias: diagnostic and therapeutic implications].
Rosenberg-Nordmann, Mirjam; Tölle, Thomas R; Sprenger, Till
2007-09-06
Trigeminal autonomic cephalgias (TACs) are primary headache syndromes characterized by severe short-lasting headaches accompanied by ipsilateral facial autonomic symptoms. The group includes cluster headache (CH), paroxysmal hemicrania (PH), and short-lasting neuralgiform headache with conjunctival injection and tearing (SUNCT). Cluster headache is by far the most frequent of these syndromes. Similar hypothalamic and trigeminovascular mechanisms have been discussed as pathophysiologic mechanisms for all TACs. The therapeutic strategies, however, differ considerably. Although unusual, structural lesions have been described in TACs and affect therapeutic management.
Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders
Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini
2008-01-01
Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
Astley, S J; Clarren, S K
1996-07-01
The purpose of this study was to demonstrate that a quantitative, multivariate case definition of the fetal alcohol syndrome (FAS) facial phenotype could be derived from photographs of individuals with FAS and to demonstrate how this case definition and photographic approach could be used to develop efficient, accurate, and precise screening tools, diagnostic aids, and possibly surveillance tools. Frontal facial photographs of 42 subjects (from birth to 27 years of age) with FAS were matched to 84 subjects without FAS. The study population was randomly divided in half. Group 1 was used to identify the facial features that best differentiated individuals with and without FAS. Group 2 was used for cross validation. In group 1, stepwise discriminant analysis identified three facial features (reduced palpebral fissure length/inner canthal distance ratio, smooth philtrum, and thin upper lip) as the cluster of features that differentiated individuals with and without FAS in groups 1 and 2 with 100% accuracy. Sensitivity and specificity were unaffected by race, gender, and age. The phenotypic case definition derived from photographs accurately distinguished between individuals with and without FAS, demonstrating the potential of this approach for developing screening, diagnostic, and surveillance tools. Further evaluation of the validity and generalizability of this method will be needed.
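As a hedged sketch of the two-group discriminant step described above, the code below fits and cross-validates a linear discriminant on three simulated facial measurements standing in for the palpebral fissure length/inner canthal distance ratio, philtrum smoothness, and upper-lip thinness; the values and group sizes are fabricated, not the study's photographic data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Simulated features per subject: [PFL/ICD ratio, philtrum smoothness, upper-lip thinness].
fas = np.column_stack([rng.normal(0.9, 0.05, 42),
                       rng.normal(4.0, 0.6, 42),
                       rng.normal(4.0, 0.6, 42)])
controls = np.column_stack([rng.normal(1.1, 0.05, 84),
                            rng.normal(2.0, 0.6, 84),
                            rng.normal(2.0, 0.6, 84)])

X = np.vstack([fas, controls])
y = np.array([1] * len(fas) + [0] * len(controls))  # 1 = FAS, 0 = non-FAS

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```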
Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions
ERIC Educational Resources Information Center
Sato, Wataru; Yoshikawa, Sakiko
2007-01-01
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Realistic facial animation generation based on facial expression mapping
NASA Astrophysics Data System (ADS)
Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe
2014-01-01
Facial expressions reflect a character's internal emotional states or responses to social communication. Although much effort has been made to generate realistic facial expressions, this remains a challenging topic due to human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation, which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
Szabó, Ádám György; Farkas, Kinga; Marosi, Csilla; Kozák, Lajos R; Rudas, Gábor; Réthelyi, János; Csukly, Gábor
2017-12-08
Schizophrenia has a negative effect on the activity of the temporal and prefrontal cortices in the processing of emotional facial expressions. However, no previous research has focused on the evaluation of mixed emotions in schizophrenia, although they occur frequently in everyday situations and negative emotions are often conveyed by mixed facial expressions. Altogether 37 subjects, 19 patients with schizophrenia and 18 healthy control subjects, were enrolled in the study. The two study groups did not differ in age or education. The stimulus set consisted of 10 fearful (100%), 10 happy (100%), 10 mixed fear (70% fear and 30% happy) and 10 mixed happy facial expressions. During the fMRI acquisition pictures were presented in a randomized order and subjects had to categorize expressions by button press. A decreased activation was found in the patient group during fear, mixed fear and mixed happy processing in the right ventrolateral prefrontal cortex (VLPFC) and the right anterior insula (RAI) at voxel and cluster level after familywise error correction. No difference was found between study groups in activation to the happy facial condition. Patients with schizophrenia did not show a differential activation between mixed happy and happy facial expressions similar to that of controls in the right dorsolateral prefrontal cortex (DLPFC). Patients with schizophrenia showed decreased functioning in right prefrontal regions responsible for salience signaling and valence evaluation during emotion recognition. Our results indicate that fear and mixed happy/fear processing are impaired in schizophrenia, while happy facial expression processing is relatively intact.
Analysis of facial expressions in parkinson's disease through video-based automatic methods.
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
2017-04-01
The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects exhibited, on average, larger distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could benefit considerably from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
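A minimal sketch, assuming facial landmarks have already been tracked as (x, y) arrays, of the distance-from-neutral measure described above: the mean Euclidean displacement of the landmarks from the subject's neutral baseline, computed frame by frame.

```python
import numpy as np

def expressivity(frames, neutral):
    """Mean Euclidean landmark displacement from the neutral baseline.

    frames:  array of shape (n_frames, n_landmarks, 2) with tracked (x, y) points.
    neutral: array of shape (n_landmarks, 2), the subject's neutral face.
    Returns one displacement value per frame; larger values suggest larger
    facial movements during the expression task.
    """
    return np.linalg.norm(frames - neutral, axis=2).mean(axis=1)

# Toy usage with random landmark tracks (illustrative only).
rng = np.random.default_rng(3)
neutral = rng.uniform(0, 100, size=(68, 2))
frames = neutral + rng.normal(0, 2.0, size=(30, 68, 2))
print(expressivity(frames, neutral).round(2))
```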
Modeling 3D Facial Shape from DNA
Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.
2014-01-01
Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127
Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness.
Ma, Fengling; Xu, Fen; Luo, Xianming
2016-01-01
This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but with-adult and within-age agreement levels of facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and the relationship between two judgments made by girls was closer than those by boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness.
Are employment-interview skills a correlate of subtypes of schizophrenia?
Charisiou, J; Jackson, H J; Boyle, G J; Burgess, P; Minas, I H; Joshua, S D
1989-12-01
46 inpatients with a DSM-III diagnosis of schizophrenia were assessed in the week prior to discharge from hospital on measures of positive and negative symptoms and on 12 measures of employment interview skills (i.e., eye contact, facial gestures, body posture, verbal content, voice volume, length of speech, motivation, self-confidence, ability to communicate, manifest adjustment, manifest intelligence, over-all interview skill), and a global measure of employability. A cluster analysis based on the total positive and negative symptom scores produced two groups. The group with the lower mean negative symptom score exhibited better employment-interview skills and higher ratings on employability.
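As an illustrative sketch only (not the study's analysis), a two-group split based on total positive and negative symptom scores can be obtained with k-means clustering; the scores below are simulated.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Simulated [total positive, total negative] symptom scores for 46 inpatients.
scores = np.vstack([rng.normal([18, 10], 3, size=(23, 2)),
                    rng.normal([14, 22], 3, size=(23, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
for k in (0, 1):
    print(f"cluster {k}: mean negative-symptom score = {scores[labels == k, 1].mean():.1f}")
```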
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layouts. Despite its wide range of potential applications, this problem has not been systematically explored to date. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate its two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
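A hedged sketch of the ratio idea, dividing an expression image by a reference image of the same subject so that multiplicative albedo and illumination factors largely cancel; the exact feature definition in the paper may differ, and the images here are synthetic.

```python
import numpy as np

def image_ratio(expression, reference, eps=1e-6):
    """Pixel-wise ratio of an expression image to a reference (e.g. neutral) image.

    Under a simple Lambertian model, intensity = albedo * shading, so the ratio
    suppresses albedo that is common to both images and emphasizes changes
    caused by skin deformation.
    """
    return expression.astype(float) / (reference.astype(float) + eps)

# Synthetic example: the same "albedo" under the same lighting, with a local bulge.
rng = np.random.default_rng(5)
albedo = rng.uniform(50, 200, size=(64, 64))
neutral = albedo * 0.8
smile = neutral.copy()
smile[40:50, 20:44] *= 1.3          # brighter patch where the cheek deforms
ratio = image_ratio(smile, neutral)
print(f"{ratio.min():.2f} {ratio.max():.2f}")  # ~1.0 everywhere except the patch
```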
Face Generation Using Emotional Regions for Sensibility Robot
NASA Astrophysics Data System (ADS)
Gotoh, Minori; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori
We think that psychological interaction is necessary for smooth communication between robots and people. One way to psychologically interact with others is through facial expressions. Facial expressions are very important for communication because they show true emotions and feelings. The "Ifbot" robot communicates with people by considering its own "emotions". Ifbot has many facial expressions to communicate enjoyment. We developed a method for generating facial expressions based on human subjective judgements mapping Ifbot's facial expressions to its emotions. We first created Ifbot's emotional space to map its facial expressions. We applied a five-layer auto-associative neural network to the space. We then subjectively evaluated the emotional space and created emotional regions based on the results. We generated emotive facial expressions using the emotional regions.
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement on one side of the face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process; producing a reliable and robust method remains challenging and is still a work in progress. We introduce a novel approach for a quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks; and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining an optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of the LAC model, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments show that the proposed method is efficient. Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and a key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation makes a significant contribution as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
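A small sketch of the symmetry-score idea, the ratio between a movement feature measured on the two sides of the face, followed by a toy rule-based screen; the feature values, threshold, and function names are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def symmetry_score(left_feature, right_feature, eps=1e-6):
    """Ratio of the smaller to the larger side, so the score lies in (0, 1];
    values near 1 suggest symmetric movement, low values suggest palsy."""
    lo, hi = sorted([abs(left_feature), abs(right_feature)])
    return lo / (hi + eps)

# Example: eyebrow displacement (in pixels) while raising the eyebrows.
healthy = symmetry_score(12.0, 11.5)
palsy = symmetry_score(12.0, 3.0)
print(round(healthy, 2), round(palsy, 2))

# Illustrative rule-based screen before a learned classifier takes over.
for name, s in [("healthy-like", healthy), ("palsy-like", palsy)]:
    print(name, "flag for further grading" if s < 0.6 else "within normal range")
```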
Cranial base topology and basic trends in the facial evolution of Homo.
Bastir, Markus; Rosas, Antonio
2016-02-01
Facial prognathism and projection are important characteristics in human evolution but their three-dimensional (3D) architectonic relationships to basicranial morphology are not clear. We used geometric morphometrics and measured 51 3D-landmarks in a comparative sample of modern humans (N = 78) and fossil Pleistocene hominins (N = 10) to investigate the spatial features of covariation between basicranial and facial elements. The study reveals complex morphological integration patterns in craniofacial evolution of Middle and Late Pleistocene hominins. A downwards-orientated cranial base correlates with alveolar maxillary prognathism, relatively larger faces, and relatively larger distances between the anterior cranial base and the frontal bone (projection). This upper facial projection correlates with increased overall relative size of the maxillary alveolar process. Vertical facial height is associated with tall nasal cavities and is accommodated by an elevated anterior cranial base, possibly because of relations between the cribriform and the nasal cavity in relation to body size and energetics. Variation in upper- and mid-facial projection can further be produced by basicranial topology in which the midline base and nasal cavity are shifted anteriorly relative to retracted lateral parts of the base and the face. The zygomatics and the middle cranial fossae act together as bilateral vertical systems that are either projected or retracted relative to the midline facial elements, causing either midfacial flatness or midfacial projection correspondingly. We propose that facial flatness and facial projection reflect classical principles of craniofacial growth counterparts, while facial orientation relative to the basicranium as well as facial proportions reflect the complex interplay of head-body integration in the light of encephalization and body size decrease in Middle to Late Pleistocene hominin evolution. Developmental and evolutionary patterns of integration may only partially overlap morphologically, and traditional concepts taken from research on two-dimensional (2D) lateral X-rays and sections have led to oversimplified and overly mechanistic models of basicranial evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study
2006-01-01
Selinger, Andrea; Socolinsky, Diego A.
Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.
Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal
2018-04-23
Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age group estimation based on the face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variation in facial feature points exists even across similar expressions, which can reduce recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
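A minimal sketch of the score-fusion step: two matching scores, one from a shape-based matcher and one from an appearance-based matcher, are stacked and fed to an SVM that separates same-expression from different-expression pairs. The scores are simulated and the paper's actual matchers are not reimplemented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)

# Simulated [shape score, appearance score] pairs: same-expression pairs
# tend to have higher scores on both channels than different-expression pairs.
same = rng.normal([0.8, 0.7], 0.1, size=(200, 2))
diff = rng.normal([0.4, 0.5], 0.1, size=(200, 2))
X = np.vstack([same, diff])
y = np.array([1] * 200 + [0] * 200)

fusion_svm = SVC(kernel="rbf").fit(X, y)
print(fusion_svm.predict([[0.85, 0.72], [0.35, 0.45]]))  # expected: [1 0]
```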
Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.
Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi
2015-03-19
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low intensity expression, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image by the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate facial representation based on weighted local binary patterns, and Fisher separation criterion is used to calculate the weighs of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighbor maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
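A minimal sketch of the oriented gradient-projection idea on a synthetic grayscale face: row-wise sums of vertical gradients peak at horizontally elongated features such as the eyes and mouth. The image is fabricated; a real pipeline would first localize the face region using color and shape.

```python
import numpy as np

# Fabricated grayscale "face": a bright region with two dark eye bands and a mouth band.
img = np.full((120, 100), 200.0)
img[40:46, 25:45] = 60    # left eye region
img[40:46, 55:75] = 60    # right eye region
img[85:90, 35:65] = 80    # mouth region

# Vertical gradient (change between neighbouring rows), projected onto the y-axis.
gy = np.abs(np.diff(img, axis=0))
row_projection = gy.sum(axis=1)

# Rows with the strongest projections bound the horizontal facial features.
candidate_rows = np.argsort(row_projection)[-6:]
print(sorted(candidate_rows.tolist()))   # clusters around the eye and mouth boundaries
```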
Mundinger, Gerhard S.; Borsuk, Daniel E.; Okhah, Zachary; Christy, Michael R.; Bojovic, Branko; Dorafshar, Amir H.; Rodriguez, Eduardo D.
2014-01-01
Efficacy of prophylactic antibiotics in craniofacial fracture management is controversial. The purpose of this study was to compare evidence-based literature recommendations regarding antibiotic prophylaxis in facial fracture management with expert-based practice. A systematic review of the literature was performed to identify published studies evaluating pre-, peri-, and postoperative efficacy of antibiotics in facial fracture management by facial third. Study level of evidence was assessed according to the American Society of Plastic Surgery criteria, and graded practice recommendations were made based on these assessments. Expert opinions were garnered during the Advanced Orbital Surgery Symposium in the form of surveys evaluating senior surgeon clinical antibiotic prescribing practices by time point and facial third. A total of 44 studies addressing antibiotic prophylaxis and facial fracture management were identified. Overall, studies were of poor quality, precluding formal quantitative analysis. Studies supported the use of perioperative antibiotics in all facial thirds, and preoperative antibiotics in comminuted mandible fractures. Postoperative antibiotics were not supported in any facial third. Survey respondents (n = 17) cumulatively reported their antibiotic prescribing practices over 286 practice years and 24,012 facial fracture cases. Percentages of prescribers administering pre-, intra-, and postoperative antibiotics, respectively, by facial third were as follows: upper face 47.1%, 94.1%, and 70.6%; midface 47.1%, 100%, and 70.6%; and mandible 68.8%, 94.1%, and 64.7%. Preoperative but not postoperative antibiotic use is recommended for comminuted mandible fractures. Frequent use of pre- and postoperative antibiotics in upper and midface fractures is not supported by literature recommendations, but with low-level evidence. Higher level studies may better guide clinical antibiotic prescribing practices. PMID:25709755
Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery
NASA Astrophysics Data System (ADS)
Voormolen, Eduard H. J.; van Stralen, Marijn; Woerdeman, Peter A.; Pluim, Josien P. W.; Noordmans, Herke J.; Regli, Luca; Berkelbach van der Sprenkel, Jan W.; Viergever, Max A.
2011-03-01
Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures and warns if the surgeon drills too close will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a segmentation method to delineate the intra-temporal facial nerve centerline from clinically available temporal bone CT images semi-automatically. Our method requires the user to provide the start- and end-point of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model based on the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40 ± 0.20 mm (mean ± standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integration of this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that can adequately warn surgeons during temporal bone drilling, and effectively diminishes risks of iatrogenic facial nerve palsy.
Physical therapy for facial paralysis: a tailored treatment approach.
Brach, J S; VanSwearingen, J M
1999-04-01
Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. In 14 physical therapy sessions over 13 months, the patient had improved facial impairments (initial FGS score= 17/100, final FGS score= 68/100) and no reported functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.
Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness
Ma, Fengling; Xu, Fen; Luo, Xianming
2016-01-01
This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but with-adult and within-age agreement levels of facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and the relationship between two judgments made by girls was closer than those by boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness. PMID:27148111
Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi
2017-01-01
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979
ERIC Educational Resources Information Center
Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna
2010-01-01
Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…
Facial paralysis caused by malignant skull base neoplasms.
Marzo, Sam J; Leonetti, John P; Petruzzelli, Guy
2002-12-01
Bell palsy remains the most common cause of facial paralysis. Unfortunately, this term is often erroneously applied to all cases of facial paralysis. The authors performed a retrospective review of data obtained in 11 patients who were treated at a university-based referral practice between July 1988 and September 2001 and who presented with acute facial nerve paralysis mimicking Bell palsy. All patients were subsequently found to harbor an occult skull base neoplasm. A delay in diagnosis was demonstrated in all cases. Seven patients died of their disease, and four patients are currently free of disease. Although Bell palsy remains the most common cause of peripheral facial nerve paralysis, patients in whom neoplasms invade the facial nerve may present with acute paralysis mimicking Bell palsy that fails to resolve. Delays in diagnosis and treatment in such cases may result in increased rates of mortality and morbidity.
Facial paralysis caused by malignant skull base neoplasms.
Marzo, Sam J; Leonetti, John P; Petruzzelli, Guy
2002-05-15
Bell palsy remains the most common cause of facial paralysis. Unfortunately, this term is often erroneously applied to all cases of facial paralysis. The authors performed a retrospective review of data obtained in 11 patients who were treated at a university-based referral practice between July 1988 and September 2001 and who presented with acute facial nerve paralysis mimicking Bell palsy. All patients were subsequently found to harbor an occult skull base neoplasm. A delay in diagnosis was demonstrated in all cases. Seven patients died of their disease, and four patients are currently free of disease. Although Bell palsy remains the most common cause of peripheral facial nerve paralysis, patients in whom neoplasms invade the facial nerve may present with acute paralysis mimicking Bell palsy that fails to resolve. Delays in diagnosis and treatment in such cases may result in increased rates of mortality and morbidity.
A Real-Time Interactive System for Facial Makeup of Peking Opera
NASA Astrophysics Data System (ADS)
Cai, Feilong; Yu, Jinhui
In this paper we present a real-time interactive system for making facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features like eye, nose, mouth, etc. Next, we pick SVG patterns from the pattern bank and compose them into a new facial makeup. We offer a vector-based free form deformation (FFD) tool to edit patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and education of Peking Opera.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues such as low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on a so-called logarithmic image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmic image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the AT&T database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.
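As a rough illustration of the kind of pipeline summarized above, the following Python sketch applies a logarithmic intensity compression before extracting a Local Binary Pattern histogram with scikit-image. The log step is only a stand-in for the paper's logarithmic image visualization technique, and all parameter values are assumptions.

```python
# Hypothetical sketch: illumination-robust face features via a log transform
# followed by Local Binary Patterns. The paper's exact visualization step is
# not reproduced; this only illustrates the log-compression + LBP idea.
import numpy as np
from skimage.feature import local_binary_pattern

def log_lbp_histogram(gray_image, n_points=8, radius=1, n_bins=59):
    """Compute an LBP histogram on a log-compressed grayscale image."""
    img = np.asarray(gray_image, dtype=np.float64)
    img = np.log1p(img)                               # compress illumination range
    img = (img - img.min()) / (np.ptp(img) + 1e-9)    # rescale to [0, 1]
    codes = local_binary_pattern(img, n_points, radius, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # feature vector for a nearest-neighbour or SVM classifier
```

Such a histogram would then be fed to whatever classifier the recognition system uses.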
A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.
Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito
2017-12-01
Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles and clarified these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined as an affected side/healthy side ratio of >1.5 or <0.5. We observed that gene expression in Bell's palsy changes with the degree of facial nerve palsy. In particular, genes in the muscle, neuron, and energy categories tended to fluctuate with the degree of facial nerve palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.
A small-world network model of facial emotion recognition.
Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto
2016-01-01
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
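A minimal sketch of how such a similarity-based network and its small-world statistics could be computed with networkx is given below; the similarity matrix is random stand-in data, not the study's ratings, and the threshold is an assumption.

```python
# Minimal sketch (assumed data): build a similarity-based facial-emotion network
# and report small-world-related statistics with networkx.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_images = 81                                   # 6 prototypes + 75 morphs in the study
similarity = rng.random((n_images, n_images))
similarity = (similarity + similarity.T) / 2    # make the matrix symmetric

# Connect pairs whose rated similarity exceeds a threshold.
graph = nx.Graph()
graph.add_nodes_from(range(n_images))
threshold = 0.8
for i in range(n_images):
    for j in range(i + 1, n_images):
        if similarity[i, j] > threshold:
            graph.add_edge(i, j)

if nx.is_connected(graph):
    print("average shortest path:", nx.average_shortest_path_length(graph))
print("average clustering:", nx.average_clustering(graph))
```

A short average path combined with high clustering, relative to a comparable random graph, is the usual signature of a small-world network.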
Person-independent facial expression analysis by fusing multiscale cell features
NASA Astrophysics Data System (ADS)
Zhou, Lubing; Wang, Han
2013-03-01
Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2004-01-01
This paper presents a new physically based 3D facial model, grounded in anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge-repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied to the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution wherever potential inaccuracies are detected, depending on local deformation. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
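The core numerical idea of the skin model, nonlinear springs advanced with a semi-implicit integrator, can be illustrated with the following toy Python sketch; the spring law, constants, and mesh are illustrative assumptions and not the paper's biomechanical model.

```python
# Toy sketch: a nonlinear spring network integrated with a semi-implicit
# (symplectic) Euler step. All constants are illustrative placeholders.
import numpy as np

def spring_forces(positions, edges, rest_lengths, k_linear=10.0, k_cubic=50.0):
    """Nonlinear spring force: linear term plus a cubic stiffening term."""
    forces = np.zeros_like(positions)
    for (a, b), rest in zip(edges, rest_lengths):
        delta = positions[b] - positions[a]
        length = np.linalg.norm(delta) + 1e-12
        stretch = length - rest
        magnitude = k_linear * stretch + k_cubic * stretch ** 3
        direction = delta / length
        forces[a] += magnitude * direction
        forces[b] -= magnitude * direction
    return forces

def semi_implicit_step(positions, velocities, edges, rest_lengths,
                       mass=1.0, damping=0.5, dt=1e-3):
    """Update velocities from current forces, then positions from new velocities."""
    f = spring_forces(positions, edges, rest_lengths) - damping * velocities
    velocities = velocities + dt * f / mass
    positions = positions + dt * velocities
    return positions, velocities
```

Updating positions with the already-updated velocities is what distinguishes the semi-implicit step from plain explicit Euler and gives it better stability for stiff spring systems.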
Advances in the understanding of cluster headache.
Leone, Massimo; Proietti Cecchini, Alberto
2017-02-01
Cluster headache is the most severe primary headache form; it occurs as paroxysmal, excruciatingly severe unilateral head pain attacks usually grouped in cluster periods. The familial occurrence of the disease indicates a genetic component, but a gene abnormality has yet to be identified. Activation of trigeminal afferents and cranial parasympathetic efferents, the so-called trigemino-parasympathetic reflex, can explain the pain and the accompanying oculo-facial autonomic phenomena. In particular, pain in cluster headache is attributed, at least in part, to increased CGRP plasma levels released by the activated trigeminal system. The posterior hypothalamus has been hypothesized to be the cluster generator activating the trigemino-parasympathetic reflex. The efficacy of monoclonal antibodies against CGRP is under investigation in randomized clinical trials. Areas covered: This paper focuses on the main findings that support considering cluster headache a neurovascular disorder originating from within the brain. Expert commentary: Accumulated evidence from hypothalamic stimulation in cluster headache patients indicates that the posterior hypothalamus terminates rather than triggers the attacks. More extensive studies on the genetics of cluster headache are necessary to disclose the anomalies behind the increased familial risk of the disease. Results from ongoing clinical trials in cluster headache sufferers using monoclonal antibodies against CGRP will soon open a new era.
Buckle, Tessa; KleinJan, Gijs H; Engelen, Thijs; van den Berg, Nynke S; DeRuiter, Marco C; van der Heide, Uulke; Valdes Olmos, Renato A; Webb, Andrew; van Buchem, Mark A; Balm, Alfons J; van Leeuwen, Fijs W B
2016-09-01
Even when guided by SPECT/CT, planning of nodal resection in the head-and-neck area is challenging due to the many critical anatomical structures present within the surgical field. In this study, the potential of a (SPECT/)MRI-based surgical planning method was explored, whereby MRI improves the identification of sentinel nodes (SNs) within clustered lymph nodes (LNs) and of vital structures located adjacent to the SN (such as cranial nerve branches). SPECT/CT and pathology reports from 100 head-and-neck melanoma and 40 oral cavity cancer patients were retrospectively assessed for SN locations in levels I-V and degree of nodal clustering. A diffusion-weighted-preparation magnetic resonance neurography (MRN) sequence was used in eight healthy volunteers to detect LNs and peripheral nerves. In 15% of patients, clustered nodes were retrospectively shown to be present at the location where the SN was identified on SPECT/CT (level IIA: 37.2%, level IIB: 21.6%, and level III: 15.5%). With MRN, improved LN delineation enabled discrimination of individual LNs within a cluster. Uniquely, this MRI technology also provided insight into LN distribution (23.2±4 LNs per subject) and size (range 21-372 mm³), and enabled non-invasive assessment of anatomical variation in the location of the LNs and facial nerves. Diffusion-weighted-preparation MRN enabled improved delineation of LNs and their surrounding delicate anatomical structures in the areas that most often harbor SNs in the head-and-neck region. Based on our findings, a combined SPECT/MRI approach is envisioned for future surgical planning of complex SN resections in this region. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fan, Ang-Xiao; Dakpé, Stéphanie; Dao, Tien Tuan; Pouletaut, Philippe; Rachik, Mohamed; Ho Ba Tho, Marie Christine
2017-07-01
Finite element simulation of facial mimics provides objective indicators about soft tissue functions for improving diagnosis, treatment and follow-up of facial disorders. There is a lack of in vivo experimental data for model development and validation. In this study, the contribution of the paired Zygomaticus Major (ZM) muscle contraction on the facial mimics was investigated using in vivo experimental data derived from MRI. Maximal relative differences of 7.7% and 37% were noted between MRI-based measurements and numerical outcomes for ZM and skin deformation behaviors respectively. This study opens a new direction to simulate facial mimics with in vivo data.
Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.
Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P
2009-07-01
Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied upon when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth made it possible to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that the regions most descriptive of and responsible for facial expression are located around a few face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using a Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies show that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
Facial expression recognition based on improved local ternary pattern and stacked auto-encoder
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
In order to enhance the robustness of facial expression recognition, we propose a facial expression recognition method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). This method extracts features with the improved LTP operator and then uses the stacked auto-encoder as the detector and classifier of the LTP features. The combination of the improved LTP and the stacked auto-encoder is realized in facial expression recognition. The recognition rate on the CK+ database improves significantly.
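For readers unfamiliar with the descriptor, the following Python sketch encodes one 3x3 neighbourhood with a basic Local Ternary Pattern, split into the usual upper and lower binary codes; the paper's improved LTP variant and the auto-encoder stage are not reproduced, and the threshold value is an assumption.

```python
# Illustrative sketch of a basic Local Ternary Pattern (LTP) code for one
# 3x3 neighbourhood, split into upper/lower binary patterns.
import numpy as np

def ltp_codes(patch3x3, t=5):
    """patch3x3: 3x3 grayscale patch; returns (upper, lower) LTP codes."""
    center = patch3x3[1, 1]
    # Neighbours taken clockwise starting from the top-left pixel.
    neighbours = patch3x3[np.array([0, 0, 0, 1, 2, 2, 2, 1]),
                          np.array([0, 1, 2, 2, 2, 1, 0, 0])]
    upper = (neighbours >= center + t).astype(int)   # ternary +1 -> upper pattern
    lower = (neighbours <= center - t).astype(int)   # ternary -1 -> lower pattern
    to_int = lambda bits: int("".join(map(str, bits)), 2)
    return to_int(upper), to_int(lower)

# Example: encode one neighbourhood.
patch = np.array([[52, 60, 61],
                  [49, 55, 70],
                  [40, 42, 57]])
print(ltp_codes(patch, t=5))
```

In a full pipeline, histograms of these codes over image patches would form the feature vector passed to the learned classifier.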
ERIC Educational Resources Information Center
Rutherford, M. D.; McIntosh, Daniel N.
2007-01-01
When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…
Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru
2015-09-01
Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with auricular combined flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, and donor sites were closed primarily. Donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Novel method to predict body weight in children based on age and morphological facial features.
Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M
2015-04-01
A new and novel approach to predicting the body weight of children based on age and morphological facial features using a three-layer feed-forward artificial neural network (ANN) model is reported. The model takes in four parameters: the age-based, CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects aged 6-18 years, with body weight ranging from 18.6 to 96.4 kg, were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combined with a facial recognition algorithm that can detect, extract, and measure the facial features used in this study, mobile applications that incorporate this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
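A minimal sketch of such a feed-forward regressor, trained here on synthetic data with scikit-learn, is shown below; the feature names, value ranges, and network size are assumptions and not the study's actual model.

```python
# Minimal sketch with synthetic data: a small feed-forward network mapping an
# age-inferred median weight plus three facial distances to body weight.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 200
median_weight = rng.uniform(20, 70, n)   # CDC-style age-inferred median weight (kg)
facial_d1 = rng.uniform(3, 8, n)         # three facial distances (arbitrary units)
facial_d2 = rng.uniform(4, 10, n)
facial_d3 = rng.uniform(8, 16, n)
X = np.column_stack([median_weight, facial_d1, facial_d2, facial_d3])
y = median_weight + 2.0 * facial_d1 + rng.normal(0, 3, n)   # synthetic target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```

A real application would replace the synthetic features with measured facial distances and validate on held-out subjects.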
Dynamic facial expression recognition based on geometric and texture features
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Zengfu
2018-04-01
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.
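The following sketch illustrates the general recipe on synthetic data: concatenate landmark displacements and texture-histogram differences between the first and a later frame, then train a support vector machine; real landmark and texture extraction is omitted, and all array shapes are assumptions.

```python
# Schematic sketch (synthetic data): pairwise geometric + texture features
# classified with an SVM. Real systems would extract landmarks and texture
# descriptors (e.g. LBP histograms) from video frames.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def pairwise_feature(landmarks_first, landmarks_t, texture_first, texture_t):
    geometric = (landmarks_t - landmarks_first).ravel()   # landmark movements
    texture = texture_t - texture_first                   # texture variation
    return np.concatenate([geometric, texture])

# Synthetic training set: 100 sequences, 68 2-D landmarks, 59-bin texture histograms.
X = np.stack([pairwise_feature(rng.random((68, 2)), rng.random((68, 2)),
                               rng.random(59), rng.random(59))
              for _ in range(100)])
y = rng.integers(0, 6, 100)                 # six expression labels

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```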
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the person under detection. Hence, facial recognition can be applied in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate localization on specific databases; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on the concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Williams, Christopher J; Thomas, Rhys H; Pickersgill, Trevor P; Lyons, Marion; Lowe, Gwen; Stiff, Rhianwen E; Moore, Catherine; Jones, Rachel; Howe, Robin; Brunt, Huw; Ashman, Anna; Mason, Brendan W
2016-01-01
We report a cluster of atypical Guillain-Barré syndrome in 10 adults temporally related to a cluster of four children with acute flaccid paralysis, over a 3-month period in South Wales, United Kingdom. All adult cases were male, aged between 24 and 77 years. Seven had prominent facial diplegia at onset. Available electrophysiological studies showed axonal involvement in five adults. Seven reported various forms of respiratory disease before the onset of neurological symptoms. The ages of the children ranged from one to 13 years; three of the four were two years old or younger. Enterovirus testing was available for three children; two had evidence of enterovirus D68 infection in stool or respiratory samples. We describe the clinical features, epidemiology, and state of current investigations for these unusual clusters of illness.
Interference among the Processing of Facial Emotion, Face Race, and Face Gender.
Li, Yongna; Tse, Chi-Shing
2016-01-01
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).
Boyette, Jennings R
2014-10-01
Facial trauma in children differs from adults. The growing facial skeleton presents several challenges to the reconstructive surgeon. A thorough understanding of the patterns of facial growth and development is needed to form an individualized treatment strategy. A proper diagnosis must be made and treatment options weighed against the risk of causing further harm to facial development. This article focuses on the management of facial fractures in children. Discussed are common fracture patterns based on the development of the facial structure, initial management, diagnostic strategies, new concepts and old controversies regarding radiologic examinations, conservative versus operative intervention, risks of growth impairment, and resorbable fixation. Copyright © 2014 Elsevier Inc. All rights reserved.
Toward DNA-based facial composites: preliminary results and validation.
Claes, Peter; Hill, Harold; Shriver, Mark D
2014-11-01
The potential of constructing useful DNA-based facial composites is of great forensic interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively by using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted face in a process akin to a photomontage or image blending. We next evaluate the accuracy of the predicted faces using cross-validation. The physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge this is the first effort at generating facial composites from DNA, and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
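A conceptual sketch of the base-face-plus-SNP-overlay idea is given below; the landmark data, SNP effect vectors, and genotype coding are illustrative assumptions rather than the study's models.

```python
# Conceptual sketch: form a sex/ancestry-matched "base face" from average
# landmark coordinates and overlay additive SNP effects. All data here are
# random stand-ins; genotypes are coded as 0/1/2 minor-allele counts.
import numpy as np

rng = np.random.default_rng(3)
n_landmarks = 30                               # dense facial meshes are used in practice

# Average face for one matched reference group (random stand-in).
base_faces = {("female", "european"): rng.random((n_landmarks, 3))}

# Per-SNP additive effect on each landmark coordinate.
snp_effects = rng.normal(0, 0.01, size=(24, n_landmarks, 3))

def predicted_face(sex, ancestry, genotypes):
    """genotypes: length-24 array of minor-allele counts (0, 1, or 2)."""
    face = base_faces[(sex, ancestry)].copy()
    for effect, dosage in zip(snp_effects, genotypes):
        face += dosage * effect                # overlay each SNP's effect
    return face

composite = predicted_face("female", "european", rng.integers(0, 3, 24))
print(composite.shape)
```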
ERIC Educational Resources Information Center
Liaw, Hongming Leonard; Chiu, Mei-Hung; Chou, Chin-Cheng
2014-01-01
It has been shown that facial expression states of learners are related to their learning. As part of a continuing research project, the current study delved further for a more detailed description of the relation between facial microexpression state (FMES) changes and learning in conceptual conflict-based instructions. Based on the data gathered…
Headache with autonomic features in a child: cluster headache or contact-point headache?
Mishra, Devendra; Choudhury, Krishna Kant; Gupta, Alok
2008-03-01
Headache and facial pain due to diseases of the nose and sinuses are not uncommon in children. However, a nasal contact point associated with headache is relatively uncommon and has unclear etiological significance. We herein report a child with headache with autonomic features and a contact point in the nose, and discuss the difficulties in diagnostic categorization.
Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya
2004-04-01
The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.
Mastication Evaluation With Unsupervised Learning: Using an Inertial Sensor-Based System.
Lucena, Caroline Vieira; Lacerda, Marcelo; Caldas, Rafael; De Lima Neto, Fernando Buarque; Rativa, Diego
2018-01-01
There is a direct relationship between the prevalence of musculoskeletal disorders of the temporomandibular joint and orofacial disorders. A well-elaborated analysis of jaw movements provides relevant information for healthcare professionals to reach their diagnosis. Different approaches have been explored to track jaw movements and make mastication analysis less subjective; however, all methods remain highly subjective, and the quality of the assessments depends greatly on the experience of the health professional. In this paper, an accurate and non-invasive method based on a commercial low-cost inertial sensor (MPU6050) for measuring jaw movements is proposed. The jaw-movement feature values are compared to those obtained with clinical analysis, showing no statistically significant difference between the two methods. Moreover, we propose using unsupervised learning approaches to cluster the mastication patterns of healthy subjects and of simulated patients with facial trauma. Two techniques were used in this paper to instantiate the method: Kohonen's Self-Organizing Maps and K-Means clustering. Both algorithms perform well on jaw-movement data, showing encouraging results and the potential to provide a full assessment of masticatory function. The proposed method can be applied in real time, providing relevant dynamic information for healthcare professionals.
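As a sketch of the clustering stage, the following scikit-learn snippet groups synthetic jaw-movement feature vectors with K-Means; the feature definitions and values are assumptions, and a Self-Organizing Map (e.g. via the minisom package) could be substituted.

```python
# Minimal sketch (synthetic features): cluster jaw-movement feature vectors
# with K-Means into two groups (e.g. healthy-like vs. trauma-like patterns).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Assumed per-chewing-cycle features: amplitude, duration, lateral deviation, velocity.
features = np.vstack([
    rng.normal([12.0, 0.7, 2.0, 40.0], [1.0, 0.05, 0.3, 4.0], size=(60, 4)),  # healthy-like
    rng.normal([7.0, 1.1, 4.5, 22.0], [1.5, 0.10, 0.6, 5.0], size=(60, 4)),   # trauma-like
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```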
Facial expression recognition based on improved deep belief networks
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of the LBP features. The combination of LBP and improved deep belief networks is realized in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improves significantly.
Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning
NASA Astrophysics Data System (ADS)
Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian
2002-05-01
Besides the static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. For a realistic simulation of facial mimics, an adequate biomechanical model of soft tissue including the mimic musculature is needed. In this work, we present an approach for the modeling of arbitrarily shaped muscles and the estimation of basic individual facial mimics, which is based on the geometrical model derived from the individual tomographic data and the general finite element modeling of soft tissue biomechanics.
Yang, Yang; Saleemi, Imran; Shah, Mubarak
2013-07-01
This paper proposes a novel representation of articulated human actions, gestures, and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one- or k-shot learning, and 2) meaningful organization of unlabeled datasets by unsupervised clustering. Our proposed representation is obtained by automatically discovering high-level subactions or motion primitives through hierarchical clustering of observed optical flow in a four-dimensional spatial and motion-flow space. The completely unsupervised proposed method, in contrast to state-of-the-art representations like bag of video words, provides a meaningful representation conducive to visual interpretation and textual labeling. Each primitive action depicts an atomic subaction, like directional motion of a limb or torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the sequence of primitive labels discovered in a test video is labeled using KL divergence, and can then be represented as a string and matched against similar strings of training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a hidden Markov model to represent classes. We have performed extensive experiments on recognition by one- and k-shot learning as well as unsupervised action clustering on six human action and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation.
NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings
2002-04-25
Table excerpt (Attachment I, Access Control Technologies: Biometrics; Facial Recognition, How the technology works): facial features are captured and compared; performance is dependent on lighting and positioning... Two primary types of facial recognition technology are used to create templates: 1. Local feature analysis, in which dozens of images from regions of the face are... an adjacent feature.
Subject-specific and pose-oriented facial features for face recognition across poses.
Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping
2012-10-01
Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that, in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces are captured in various poses by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
Cognitive penetrability and emotion recognition in human facial expressions
Marchi, Francesco
2015-01-01
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796
NASA Astrophysics Data System (ADS)
Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide
2017-01-01
Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise), focused on three individual facial regions: eyes-eyebrows, nose, and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using four standard databases for both racial groups, and the results are compared with a cross-cultural human study with 20 participants. Our analysis reveals that differences between Westerners and East Asians exist mainly in the eyes-eyebrows and mouth regions, for expressions of fear and disgust respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
Lindsay, Kaitlin E; Rühli, Frank J; Deleon, Valerie Burke
2015-06-01
The technique of forensic facial approximation, or reconstruction, is one of many facets of the field of mummy studies. Although far from a rigorous scientific technique, evidence-based visualization of antemortem appearance may supplement radiological, chemical, histological, and epidemiological studies of ancient remains. Published guidelines exist for creating facial approximations, but few approximations are published with documentation of the specific process and references used. Additionally, significant new research has taken place in recent years which helps define best practices in the field. This case study records the facial approximation of a 3,000-year-old ancient Egyptian woman using medical imaging data and the digital sculpting program, ZBrush. It represents a synthesis of current published techniques based on the most solid anatomical and/or statistical evidence. Through this study, it was found that although certain improvements have been made in developing repeatable, evidence-based guidelines for facial approximation, there are many proposed methods still awaiting confirmation from comprehensive studies. This study attempts to assist artists, anthropologists, and forensic investigators working in facial approximation by presenting the recommended methods in a chronological and usable format. © 2015 Wiley Periodicals, Inc.
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimation values equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
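The general idea, projecting facial image vectors onto principal components and regressing one rater's scores on them, can be sketched as follows with scikit-learn; this is not the paper's modified PCA, and the data are random placeholders.

```python
# Sketch of the general PCA + regression idea for attractiveness prediction,
# evaluated with Pearson's r on held-out images. Data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
images = rng.random((120, 64 * 64))          # flattened face images (assumed size)
scores = rng.uniform(1, 10, 120)             # one rater's attractiveness ratings

X_train, X_test, y_train, y_test = train_test_split(images, scores, random_state=0)
pca = PCA(n_components=20).fit(X_train)
reg = Ridge(alpha=1.0).fit(pca.transform(X_train), y_train)
pred = reg.predict(pca.transform(X_test))
print("Pearson r on held-out images:", round(pearsonr(pred, y_test)[0], 2))
```

With random placeholder data the correlation will be near zero; the point is only the shape of the pipeline.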
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g., smiles) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
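A hedged sketch of parameterized intermediate expressions, linearly blending a neutral FAP vector toward a prototype expression according to an activation value, follows; the FAP indices and amplitudes are placeholders rather than actual MPEG-4 tables.

```python
# Sketch: blend a neutral FAP vector toward a prototype expression's FAP
# vector using an activation parameter in [0, 1]. Index and amplitude values
# below are illustrative, not taken from the MPEG-4 specification.
import numpy as np

def intermediate_expression(neutral_faps, prototype_faps, activation):
    """activation in [0, 1]: 0 = neutral, 1 = full prototype expression."""
    activation = float(np.clip(activation, 0.0, 1.0))
    return (1.0 - activation) * neutral_faps + activation * prototype_faps

neutral = np.zeros(68)                      # MPEG-4 defines 68 FAPs
joy_prototype = np.zeros(68)
joy_prototype[[3, 4, 5]] = [120, 80, 80]    # placeholder mouth-related values
print(intermediate_expression(neutral, joy_prototype, 0.5)[:6])
```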
Grunebaum, Lisa Danielle; Reiter, David
2006-01-01
To determine current practice for use of perioperative antibiotics among facial plastic surgeons, to determine the extent of use of literature support for preferences of facial plastic surgeons, and to compare patterns of use with nationally supported evidence-based guidelines. A link to a Web site containing a questionnaire on perioperative antibiotic use was e-mailed to more than 1000 facial plastic surgeons in the United States. Responses were archived in a dedicated database and analyzed to determine patterns of use and methods of documenting that use. Current literature was used to develop evidence-based recommendations for perioperative antibiotic use, emphasizing current nationally supported guidelines. Preferences varied significantly for medication used, dosage and regimen, time of first dose relative to incision time, setting in which medication was administered, and procedures for which perioperative antibiotic was deemed necessary. Surgical site infection in facial plastic surgery can be reduced by better conformance to currently available evidence-based guidelines. We offer specific recommendations that are supported by the current literature.
Static and Dynamic Facial Cues Differentially Affect the Consistency of Social Evaluations.
Hehman, Eric; Flake, Jessica K; Freeman, Jonathan B
2015-08-01
Individuals are quite sensitive to others' appearance cues when forming social evaluations. Cues such as facial emotional resemblance are based on facial musculature and thus dynamic. Cues such as a face's structure are based on the underlying bone and are thus relatively static. The current research examines the distinction between these types of facial cues by investigating the consistency in social evaluations arising from dynamic versus static cues. Specifically, across four studies using real faces, digitally generated faces, and downstream behavioral decisions, we demonstrate that social evaluations based on dynamic cues, such as intentions, have greater variability across multiple presentations of the same identity than do social evaluations based on static cues, such as ability. Thus, although evaluations of intentions vary considerably across different instances of a target's face, evaluations of ability are relatively fixed. The findings highlight the role of facial cues' consistency in the stability of social evaluations. © 2015 by the Society for Personality and Social Psychology, Inc.
Facial expression recognition based on weber local descriptor and sparse representation
NASA Astrophysics Data System (ADS)
Ouyang, Yan
2018-03-01
Automatic facial expression recognition has been one of the research hotspots in the area of computer vision for nearly ten years. During this decade, many state-of-the-art methods have been proposed which achieve very high accuracy rates on face images without any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
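The classification stage can be sketched as follows: a test descriptor is sparsely coded over the training dictionary and assigned to the class whose atoms reconstruct it with the smallest residual; random histograms stand in for the WLD features, which are not implemented here.

```python
# Compact sketch of sparse-representation classification (SRC) over feature
# histograms. Random vectors stand in for per-patch WLD descriptors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n_classes, per_class, dim = 6, 10, 120
train = rng.random((n_classes * per_class, dim))          # rows: training descriptors
labels = np.repeat(np.arange(n_classes), per_class)
test = train[7] + 0.05 * rng.random(dim)                  # noisy copy of a class-0 sample

# Sparse coding of the test vector over the training dictionary.
coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(train.T, test)
coefs = coder.coef_

# Classify by the class whose atoms reconstruct the test vector best.
residuals = []
for c in range(n_classes):
    mask = labels == c
    recon = train[mask].T @ coefs[mask]
    residuals.append(np.linalg.norm(test - recon))
print("predicted class:", int(np.argmin(residuals)))
```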
Update on botulinum toxin and dermal fillers.
Berbos, Zachary J; Lipham, William J
2010-09-01
The art and science of facial rejuvenation is an ever-evolving field of medicine, as evidenced by the continual development of new surgical and nonsurgical treatment modalities. Over the past 10 years, the use of botulinum toxin and dermal fillers for aesthetic purposes has risen sharply. Herein, we discuss properties of several commonly used injectable products and provide basic instruction for their use toward the goal of achieving facial rejuvenation. The demand for nonsurgical injection-based facial rejuvenation products has risen enormously in recent years. Used independently or concurrently, botulinum toxin and dermal filler agents offer an affordable, minimally invasive approach to facial rejuvenation. Botulinum toxin and dermal fillers can be used to diminish facial rhytides, restore facial volume, and sculpt facial contours, thereby achieving an aesthetically pleasing, youthful facial appearance.
Facial Displays Are Tools for Social Influence.
Crivelli, Carlos; Fridlund, Alan J
2018-05-01
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Facial trauma: general principles of management.
Hollier, Larry H; Sharabi, Safa E; Koshy, John C; Stal, Samuel
2010-07-01
Facial fractures are common problems encountered by the plastic surgeon. Although ubiquitous in nature, their optimal treatment requires precise knowledge of the most recent evidence-based and technologically advanced recommendations. This article discusses a variety of contemporary issues regarding facial fractures, including physical and radiologic diagnosis, treatment pearls and caveats, and the role of various synthetic materials and plating technologies for optimal facial fracture fixation.
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Tsumura, Norimichi
2018-02-01
In this research, we evaluate the visibility of age spots and freckles as the blood volume changes, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. In the results for the simulated spectral reflectance distribution, we found that the visibility became lower as the blood volume increased. However, the results for the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.
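A rough sketch of the decomposition step, independent component analysis on log-RGB pixel values to obtain two chromophore-like maps, is given below with scikit-learn; random pixels stand in for a real skin image, and the paper's shading handling and spectral simulation are not reproduced.

```python
# Rough sketch: ICA on log-RGB pixel values to separate two chromophore-like
# components (treated here as melanin- and hemoglobin-like maps).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
rgb = rng.uniform(0.05, 1.0, size=(64, 64, 3))            # placeholder skin image
density = -np.log(rgb.reshape(-1, 3))                      # optical-density domain

ica = FastICA(n_components=2, random_state=0, whiten="unit-variance")
components = ica.fit_transform(density)                    # per-pixel component values
melanin_map = components[:, 0].reshape(64, 64)
hemoglobin_map = components[:, 1].reshape(64, 64)

# To simulate increased blood volume, one would scale the hemoglobin-like
# component and invert the transform (ica.inverse_transform) before
# re-exponentiating back to reflectance.
print(melanin_map.shape, hemoglobin_map.shape)
```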
Automatic detection of confusion in elderly users of a web-based health instruction video.
Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek
2015-06-01
Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal
2018-06-01
Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression than the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also be recognized on visual information alone.
Perceived functional impact of abnormal facial appearance.
Rankin, Marlene; Borah, Gregory L
2003-06-01
Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payments for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial appearances. Facial deformity caused by trauma, congenital disabilities, and postsurgical sequelae present with significant adverse functional consequences. Facial deformities have a significant negative effect on perceptions of social functionality, including employability, honesty, and trustworthiness. Adverse perceptions of patients with facial deformities occur regardless of sex, educational level, and age of evaluator.
A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans
Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred
2012-01-01
Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347
Sound-induced facial synkinesis following facial nerve paralysis.
Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F
2009-08-01
Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.
Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection. PMID:29292716
United States Homeland Security and National Biometric Identification
2002-04-09
security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are
Nevalainen, Netta; Lähdesmäki, Raija; Mäki, Pirjo; Ek, Ellen; Taanila, Anja; Pesonen, Paula; Sipilä, Kirsi
2017-05-01
The aim was to study the association between stress level and chronic facial pain, while controlling for the effect of depression on this association, during a three-year follow-up in a general population-based birth cohort. In the general population-based Northern Finland 1966 Birth Cohort, information about stress level, depression and facial pain was collected using questionnaires at the age of 31 years. Stress level was measured using the Work Ability Index. Depression was assessed using the 13-item depression subscale in the Hopkins Symptom Checklist-25. Three years later, a subsample of 52 subjects (42 women) with chronic facial pain and 52 pain-free controls (42 women) was formed. Of the subjects having a high stress level at baseline, 73.3% had chronic facial pain, and 26.7% were pain-free three years later. Univariate logistic regression analysis showed that a high stress level at 31 years increased the risk for chronic facial pain (crude OR 6.1, 95% CI 1.3-28.7) three years later. When including depression in a multivariate model, depression was statistically significantly associated with chronic facial pain (adjusted OR 2.5, 95% CI 1.0-5.8), whereas stress level was not (adjusted OR 2.3, 95% CI 0.6-8.4). A high stress level is connected with an increased risk for chronic facial pain. This association appears to be mediated through depression.
Down syndrome detection from facial photographs using machine learning techniques
NASA Astrophysics Data System (ADS)
Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George
2013-02-01
Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk for heart defects, respiratory and hearing problems, and the early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform and local binary pattern are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using the leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment for Down syndrome from simple, noninvasive imaging data.
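The classification and validation stage described above (an SVM assessed with leave-one-out validation) can be sketched as follows. This is a minimal illustration with synthetic placeholder features, not the authors' pipeline; in practice the combined geometric and texture descriptors would populate the feature matrix.

```python
# Sketch: SVM classification with leave-one-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 40))     # placeholder geometric + texture features
labels = rng.integers(0, 2, size=50)     # toy labels: 1 = syndrome, 0 = healthy

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
pred = cross_val_predict(clf, features, labels, cv=LeaveOneOut())

print("accuracy :", accuracy_score(labels, pred))
print("precision:", precision_score(labels, pred))
print("recall   :", recall_score(labels, pred))
```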
Automatically Log Off Upon Disappearance of Facial Image
2005-03-01
log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only
MacDonald, P M; Kirkpatrick, S W; Sullivan, L A
1996-11-01
Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two context conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.
Xu, Fen; Wu, Dingcheng; Toriyama, Rie; Ma, Fengling; Itakura, Shoji; Lee, Kang
2012-01-01
All cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments. In the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a "shortcut" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness. The results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a "shortcut" for trustworthiness judgments.
Exaggerated perception of facial expressions is increased in individuals with schizotypal traits
Uono, Shota; Sato, Wataru; Toichi, Motomi
2015-01-01
Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081
Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia
2016-05-01
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Facial attractiveness, symmetry, and physical fitness in young women.
Hönekopp, Johannes; Bartholomé, Tobias; Jansen, Gregor
2004-06-01
This study explores the evolutionary-based hypothesis that facial attractiveness (a guiding force in mate selection) is a cue for physical fitness (presumably an important contributor to mate value in ancestral times). Since fluctuating asymmetry, a measure of developmental stability, is known to be a valid cue for fitness in several biological domains, we scrutinized facial asymmetry as a potential mediator between attractiveness and fitness. In our sample of young women, facial beauty indeed indicated physical fitness. The relationships that pertained to asymmetry were in the expected direction. However, a closer analysis revealed that facial asymmetry did not mediate the relationship between fitness and attractiveness. Unexpected problems regarding the measurement of facial asymmetry are discussed.
Illuminant color estimation based on pigmentation separation from human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2015-03-01
Humans have a visual capability called "color constancy" that maintains the perceived colors of the same object across various light sources. An effective color constancy algorithm has been proposed that uses the human facial color in a digital color image; however, this method produces erroneous estimates because of differences between individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis, which separates the skin color into melanin, hemoglobin, and shading components. We use a stationary property of Japanese facial color that is calculated from the melanin and hemoglobin components. As a result, the proposed method uses the subject's facial color in the image while remaining insensitive to individual differences among Japanese facial colors.
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired, via a high-accuracy industrial "line-laser" scanner (Faro), as the reference model, and two test models were obtained, via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinical use.
2013-04-01
bioreactor systems, a microfluidic -based flexible fluid exchange patch was developed for porcine wound models. A novel design and fabrication process...to be established. 15. SUBJECT TERMS Biomask, burn injury, facial reconstruction, wound-healing, bioreactor, flexible microfluidic , and...and layers of facial skin using different cell types and matrices to produce a reliable, physiologic facial and skin construct to restore functional
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
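The expression-eigenspace construction above rests on a PCA reduction of expression-only motion signals. The sketch below illustrates that step alone, assuming each row of `expression_frames` is a flattened frame of marker displacements with the speech component already removed; the data and dimensions are synthetic placeholders rather than the authors' PIEES pipeline.

```python
# Sketch: building a low-dimensional expression space with PCA and
# resynthesizing frames from manipulated expression codes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expression_frames = rng.normal(size=(300, 90))   # e.g. 30 markers x 3 coords

pca = PCA(n_components=10)
codes = pca.fit_transform(expression_frames)     # per-frame expression codes

# A new expression signal can be generated in code space and mapped back to
# marker space for blending with synthesized neutral visual speech.
new_codes = codes[:60] * 1.2                     # toy manipulation
new_frames = pca.inverse_transform(new_codes)
print(new_frames.shape, pca.explained_variance_ratio_[:3])
```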
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.
Foolad, Negar; Shi, Vivian Y; Prakash, Neha; Kamangar, Faranak; Sivamani, Raja K
2015-06-16
Rosacea and melasma are two common skin conditions in dermatology. Both conditions have a predilection for the centrofacial region where the sebaceous gland density is the highest. However it is not known if sebaceous function has an association with these conditions. We aimed to assess the relationship between facial glabellar wrinkle severity and facial sebum excretion rate for individuals with rosacea, melasma, both conditions, and in those with rhytides. Secondly, the purpose of this study was to utilize high resolution 3D facial modeling and measurement technology to obtain information regarding glabellar rhytid count and severity. A total of 21 subjects participated in the study. Subjects were divided into four groups based on facial features: rosacea-only, melasma-only, rosacea and melasma, rhytides-only. A high resolution facial photograph was taken followed by measurement of facial sebum excretion rate (SER). The SER was found to decline with age and with the presence of melasma. The SER negatively correlated with increasing Wrinkle Severity Rating Scale. Through the use of 3D facial modeling and skin analysis technology, we found a positive correlation between clinically based grading scores and computer generated glabellar rhytid count and severity. Continuing research with facial modeling and measurement systems will allow for development of more objective facial assessments. Future studies need to assess the role of technology in stratifying the severity and subtypes of rosacea and melasma. Furthermore, the role of sebaceous regulation may have important implications in photoaging.
Near-infrared imaging of face transplants: are both pedicles necessary?
Nguyen, John T; Ashitate, Yoshitomo; Venugopal, Vivek; Neacsu, Florin; Kettenring, Frank; Frangioni, John V; Gioux, Sylvain; Lee, Bernard T
2013-09-01
Facial transplantation is a complex procedure that corrects severe facial defects due to traumas, burns, and congenital disorders. Although face transplantation has been successfully performed clinically, potential risks include tissue ischemia and necrosis. The vascular supply is typically based on the bilateral neck vessels. As it remains unclear whether perfusion can be based off a single pedicle, this study was designed to assess perfusion patterns of facial transplant allografts using near-infrared (NIR) fluorescence imaging. Upper facial composite tissue allotransplants were created using both carotid artery and external jugular vein pedicles in Yorkshire pigs. A flap validation model was created in n = 2 pigs and a clamp occlusion model was performed in n = 3 pigs. In the clamp occlusion models, sequential clamping of the vessels was performed to assess perfusion. Animals were injected with indocyanine green and imaged with NIR fluorescence. Quantitative metrics were assessed based on fluorescence intensity. With NIR imaging, arterial perforators emitted fluorescence indicating perfusion along the surface of the skin. Isolated clamping of one vascular pedicle showed successful perfusion across the midline based on NIR fluorescence imaging. This perfusion extended into the facial allograft within 60 s and perfused the entire contralateral side within 5 min. Determination of vascular perfusion is important in microsurgical constructs as complications can lead to flap loss. It is still unclear if facial transplants require both pedicles. This initial pilot study using intraoperative NIR fluorescence imaging suggests that facial flap models can be adequately perfused from a single pedicle. Copyright © 2013 Elsevier Inc. All rights reserved.
2013-06-01
fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC...the UAV is processed on board for facial recognition and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video...captured by the fixed sensors are sent directly to the NOC for facial recognition and behavior analysis processing. The multi- directional signal
Mastication Evaluation With Unsupervised Learning: Using an Inertial Sensor-Based System
Lucena, Caroline Vieira; Lacerda, Marcelo; Caldas, Rafael; De Lima Neto, Fernando Buarque
2018-01-01
There is a direct relationship between the prevalence of musculoskeletal disorders of the temporomandibular joint and orofacial disorders. A well-elaborated analysis of jaw movements provides relevant information for healthcare professionals to conclude their diagnosis. Different approaches have been explored to track jaw movements so that mastication analysis becomes less subjective; however, all methods are still highly subjective, and the quality of the assessments depends much on the experience of the health professional. In this paper, an accurate and non-invasive method based on a commercial low-cost inertial sensor (MPU6050) to measure jaw movements is proposed. The jaw-movement feature values are compared to those obtained with clinical analysis, showing no statistically significant difference between the two methods. Moreover, we propose to use unsupervised learning approaches to cluster mastication patterns of healthy subjects and simulated patients with facial trauma. Two techniques were used in this paper to instantiate the method: Kohonen's Self-Organizing Maps and K-Means Clustering. Both algorithms perform very well on jaw-movement data, showing encouraging results and the potential to provide a full assessment of masticatory function. The proposed method can be applied in real time, providing relevant dynamic information for healthcare professionals. PMID:29651365
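The clustering step can be illustrated with the K-Means variant mentioned above. The sketch below assumes each row is a feature vector summarizing one mastication recording (e.g. amplitude, duration, axis ratios); the feature matrix is synthetic and the choice of scikit-learn is illustrative.

```python
# Sketch: K-Means clustering of per-recording jaw-movement features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
jaw_features = rng.normal(size=(200, 6))    # placeholder mastication features

X = StandardScaler().fit_transform(jaw_features)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("cluster sizes:", np.bincount(kmeans.labels_))
# The resulting clusters would then be inspected for correspondence with
# healthy vs. simulated facial-trauma mastication patterns.
```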
The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.
Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S
2018-04-01
This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to that of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding use of facial plastic procedures among facial plastic surgeons.
Signatures of personality on dense 3D facial images.
Hu, Sile; Xiong, Jieyi; Fu, Pengcheng; Qiao, Lu; Tan, Jingze; Jin, Li; Tang, Kun
2017-03-06
It has long been speculated that cues on the human face exist that allow observers to make reliable judgments of others' personality traits. However, direct evidence of association between facial shapes and personality is missing from the current literature. This study assessed the personality attributes of 834 Han Chinese volunteers (405 males and 429 females), utilising the five-factor personality model ('Big Five'), and collected their neutral 3D facial images. Dense anatomical correspondence was established across the 3D facial images in order to allow high-dimensional quantitative analyses of the facial phenotypes. In this paper, we developed a Partial Least Squares (PLS) -based method. We used composite partial least squares component (CPSLC) to test association between the self-tested personality scores and the dense 3D facial image data, then used principal component analysis (PCA) for further validation. Among the five personality factors, agreeableness and conscientiousness in males and extraversion in females were significantly associated with specific facial patterns. The personality-related facial patterns were extracted and their effects were extrapolated on simulated 3D facial models.
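The association analysis described above pairs high-dimensional facial shape data with trait scores. A simplified stand-in for the CPSLC analysis is ordinary partial least squares, sketched here with synthetic placeholder data; the array shapes and variable names are illustrative only.

```python
# Sketch: relating dense 3D facial shape to Big Five scores with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
face_coords = rng.normal(size=(100, 3000))   # flattened dense facial vertices
big_five = rng.normal(size=(100, 5))         # self-reported trait scores

pls = PLSRegression(n_components=2)
pls.fit(face_coords, big_five)
shape_scores, trait_scores = pls.transform(face_coords, big_five)

# The correlation of the first latent component pair indicates how strongly
# facial shape covaries with personality in this toy example.
r = np.corrcoef(shape_scores[:, 0], trait_scores[:, 0])[0, 1]
print("first-component correlation:", r)
```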
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.
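For orientation, the sketch below shows the basic (non-probabilistic) collaborative-representation classifier: a test feature vector is coded over all training samples with ridge-regularized least squares and assigned to the class with the smallest reconstruction residual. This is a simplified stand-in for the probabilistic variant used above, with toy data and illustrative names.

```python
# Sketch: collaborative representation classification by class-wise residuals.
import numpy as np

def crc_predict(train_X, train_y, test_x, lam=0.01):
    """train_X: (n_features, n_train) columns of training feature vectors."""
    n = train_X.shape[1]
    P = np.linalg.solve(train_X.T @ train_X + lam * np.eye(n), train_X.T)
    alpha = P @ test_x                              # coding coefficients
    residuals = {}
    for c in np.unique(train_y):
        mask = (train_y == c)
        recon = train_X[:, mask] @ alpha[mask]
        residuals[c] = (np.linalg.norm(test_x - recon)
                        / (np.linalg.norm(alpha[mask]) + 1e-12))
    return min(residuals, key=residuals.get)

# Toy usage: columns stand in for color-feature vectors of facial key blocks.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(12, 40)), np.repeat([0, 1], 20)
print(crc_predict(X, y, rng.normal(size=12)))
```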
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability-holistic face processing-remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how information supporting holistic face processing interacts with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.
Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong
2018-04-11
In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome the shortcoming of previous methods, namely their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and perform classification with a downstream classifier. A binary skin/non-skin classifier is used to detect the skin area, and a seven-class classifier is used to distinguish facial acne vulgaris types from healthy skin. In the experiments, we compare the effectiveness of our own CNN with that of the VGG16 network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 network is effective at extracting features from facial acne vulgaris images, and these features are very useful for the follow-up classifiers. Finally, we apply both classifiers built on the pre-trained VGG16 network to assist doctors in facial acne vulgaris diagnosis.
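Feature extraction with a pre-trained VGG16 backbone, as evaluated above, can be sketched with a standard deep-learning toolkit. The snippet below uses Keras; the image file name is a hypothetical placeholder, and the downstream skin and acne classifiers are omitted.

```python
# Sketch: extracting a 512-d feature vector from a pre-trained VGG16 network.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

base = VGG16(weights="imagenet", include_top=False, pooling="avg")

img = image.load_img("skin_patch.jpg", target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = base.predict(x)     # shape (1, 512); input to a follow-up classifier
print(features.shape)
```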
Nagarajan, R; Hariharan, M; Satiyan, M
2012-08-01
Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
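The wavelet feature extraction described above reduces each image to the standard deviations of its level-1 detail coefficients. A minimal sketch with PyWavelets and a kNN classifier follows; the images and labels are synthetic placeholders, and only one wavelet ("db4") is shown out of the families listed above.

```python
# Sketch: level-1 2-D DWT features (std of detail subbands) fed to kNN.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(gray_image, wavelet="db4"):
    _, (cH, cV, cD) = pywt.dwt2(gray_image, wavelet)   # level-1 decomposition
    return np.array([cH.std(), cV.std(), cD.std()])

rng = np.random.default_rng(0)
images = rng.random(size=(20, 64, 64))                 # placeholder face images
X = np.vstack([dwt_features(im) for im in images])
y = rng.integers(0, 8, size=20)                        # eight expression labels

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:2]))
```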
Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.
He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan
2009-07-01
Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
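The symmetry measurement above compares LBP features from the two sides of the face with the resistor-average distance. The sketch below shows that comparison for a single pair of regions, using scikit-image's uniform LBP and histogram-based divergences; the regions are synthetic placeholders, and the block-processing and temporal enhancements are omitted.

```python
# Sketch: LBP histograms per face side and their resistor-average distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(region, P=8, R=1):
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist + 1e-12                        # avoid zero bins

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    a, b = kl(p, q), kl(q, p)
    return (a * b) / (a + b) if (a + b) > 0 else 0.0

rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
right = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(resistor_average_distance(lbp_hist(left), lbp_hist(right)))
```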
Shu, Ting; Zhang, Bob
2015-04-01
Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive, slightly painful, and its analysis time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93 %, a sensitivity of 94 %, a specificity of 92 %, using a combination of the Gabor filters and facial blocks.
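The texture extraction above convolves each facial block with a Gabor filter bank. A minimal sketch using scikit-image follows; the block is a synthetic placeholder, the frequency and orientation grid is an illustrative choice, and the kNN/SVM classification stage is omitted.

```python
# Sketch: mean Gabor response magnitudes as a texture feature vector.
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(block, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(block, frequency=f, theta=theta)
            feats.append(np.mean(np.hypot(real, imag)))   # mean magnitude
    return np.array(feats)

rng = np.random.default_rng(0)
facial_block = rng.random((64, 64))
print(gabor_texture_features(facial_block).shape)   # 12-dimensional feature
```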
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
2016-03-26
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expression of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance was constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, high expenses of 3-D cameras prevent their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fit the model to a 2.5-D face for localizing facial landmarks automatically. In FER, a novel action unit (AU) space-based FER method has been proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.
Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane
2013-01-01
The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squared on three-dimensional landmarks coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232
The fallopian canal: a comprehensive review and proposal of a new classification.
Mortazavi, M M; Latif, B; Verma, K; Adeeb, N; Deep, A; Griessenauer, C J; Tubbs, R S; Fukushima, T
2014-03-01
The facial nerve follows a complex course through the skull base. Understanding its anatomy is crucial during standard skull base approaches and resection of certain skull base tumors closely related to the nerve, especially, tumors at the cerebellopontine angle. Herein, we review the fallopian canal and its implications in surgical approaches to the skull base. Furthermore, we suggest a new classification. Based on the anatomy and literature, we propose that the meatal segment of the facial nerve be included as a component of the fallopian canal. A comprehensive knowledge of the course of the facial nerve is important to those who treat patients with pathology of or near this cranial nerve.
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, whereas the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, and the mapping and interpolation are performed under the constraint of Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time simulation of cartoon facial expressions. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with a previous method, and the data show that it greatly improves implementation efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuse, Takeo; Tada, Yuichiro; Aoyagi, Masaru
1996-03-01
The purpose of this study was to determine the accuracy of high resolution CT (HRCT) in the detection of facial canal dehiscence and semicircular canal fistula, the preoperative evaluation of both of which is clinically very important for ear surgery. We retrospectively reviewed the HRCT findings in 61 patients who underwent mastoidectomy at Yamagata University between 1989 and 1993. The HRCT images were obtained in the axial and semicoronal planes using 1 mm slice thickness and 1 mm intersection gap. In 46 (75%) of the 61 patients, the HRCT image-based assessment of the facial canal dehiscence coincided with the surgical findings. The data for the facial canal revealed sensitivity of 66% and specificity of 84%. For semicircular canal fistula, in 59 (97%) of the 61 patients, the HRCT image-based assessment and the surgical findings coincided. The image-based assessment in the remaining two patients, who both had massive cholesteatoma, was false-positive. HRCT is useful in the diagnosis of facial canal dehiscence and labyrinthine fistula, but its limitations should also be recognized. 12 refs., 3 figs., 6 tabs.
Anatomically accurate individual face modeling.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2003-01-01
This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting the laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogenous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to the muscle contraction.
Spoofing detection on facial images recognition using LBP and GLCM combination
NASA Astrophysics Data System (ADS)
Sthevanie, F.; Ramadhani, K. N.
2018-03-01
The challenge for facial-image-based security systems is how to detect facial image falsification such as spoofing. Spoofing occurs when someone tries to pretend to be a registered user in order to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed texture analysis method combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using only the LBP or GLCM features.
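The combined texture descriptor above can be approximated by concatenating a uniform-LBP histogram with a few GLCM statistics. The sketch below assumes an 8-bit grayscale face image and scikit-image >= 0.19 (graycomatrix/graycoprops); the specific GLCM properties and distances are illustrative choices.

```python
# Sketch: LBP histogram + GLCM statistics as one combined feature vector.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbp_glcm_features(gray_u8, P=8, R=1):
    lbp = local_binary_pattern(gray_u8, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.concatenate([graycoprops(glcm, prop).ravel()
                                 for prop in ("contrast", "homogeneity", "energy")])
    return np.concatenate([lbp_hist, glcm_feats])

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(lbp_glcm_features(face).shape)   # combined LBP + GLCM feature vector
```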
Facial asymmetry quantitative evaluation in oculoauriculovertebral spectrum.
Manara, Renzo; Schifano, Giovanni; Brotto, Davide; Mardari, Rodica; Ghiselli, Sara; Gerunda, Antonio; Ghirotto, Cristina; Fusetti, Stefano; Piacentile, Katherine; Scienza, Renato; Ermani, Mario; Martini, Alessandro
2016-03-01
Facial asymmetries in oculoauriculovertebral spectrum (OAVS) patients might require surgical corrections that are mostly based on qualitative approach and surgeon's experience. The present study aimed to develop a quantitative 3D CT imaging-based procedure suitable for maxillo-facial surgery planning in OAVS patients. Thirteen OAVS patients (mean age 3.5 ± 4.0 years; range 0.2-14.2, 6 females) and 13 controls (mean age 7.1 ± 5.3 years; range 0.6-15.7, 5 females) who underwent head CT examination were retrospectively enrolled. Eight bilateral anatomical facial landmarks were defined on 3D CT images (porion, orbitale, most anterior point of frontozygomatic suture, most superior point of temporozygomatic suture, most posterior-lateral point of the maxilla, gonion, condylion, mental foramen) and distance from orthogonal planes (in millimeters) was used to evaluate the asymmetry on each axis and to calculate a global asymmetry index of each anatomical landmark. Mean asymmetry values and relative confidence intervals were obtained from the control group. OAVS patients showed 2.5 ± 1.8 landmarks above the confidence interval while considering the global asymmetry values; 12 patients (92%) showed at least one pathologically asymmetric landmark. Considering each axis, the mean number of pathologically asymmetric landmarks increased to 5.5 ± 2.6 (p = 0.002) and all patients presented at least one significant landmark asymmetry. Modern CT-based 3D reconstructions allow accurate assessment of facial bone asymmetries in patients affected by OAVS. The evaluation as a global score and in different orthogonal axes provides precise quantitative data suitable for maxillo-facial surgical planning. CT-based 3D reconstruction might allow a quantitative approach for planning and following-up maxillo-facial surgery in OAVS patients.
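The global asymmetry index is not specified in full in the abstract, but a simple per-landmark asymmetry measure in the spirit of the study could be sketched as follows (the coordinates, the mid-sagittal reference plane and the aggregation into a single index are assumptions):

```python
import numpy as np

def landmark_asymmetry(left_pt, right_pt):
    """Per-axis and global asymmetry for one bilateral landmark pair.

    Coordinates are assumed to be in mm relative to three orthogonal reference
    planes, with x measured from the mid-sagittal plane.
    """
    left, right = np.asarray(left_pt, float), np.asarray(right_pt, float)
    mirrored_right = right * np.array([-1.0, 1.0, 1.0])   # reflect across mid-sagittal plane
    per_axis = np.abs(left - mirrored_right)              # asymmetry on each axis (mm)
    global_index = np.linalg.norm(left - mirrored_right)  # one aggregated value (mm)
    return per_axis, global_index

# Hypothetical left/right gonion coordinates (mm).
per_axis, g = landmark_asymmetry([45.0, -12.0, 30.0], [-43.5, -11.0, 31.5])
print(per_axis, g)
```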
Wang, Ming Feng; Otsuka, Takero; Akimoto, Susumu; Sato, Sadao
2013-01-01
The aim of the present study was to evaluate how vertical facial height correlates with mandibular plane angle, facial width and depth from a three dimensional (3D) viewing angle. In this study 3D cephalometric landmarks were identified and measurements from 43 randomly selected cone beam computed tomography (CBCT) images of dry skulls from the Weisbach collection of Vienna Natural History Museum were analyzed. Pearson correlation coefficients of facial height measurements and mandibular plane angle and the correlation coefficients of height-width and height-depth were calculated, respectively. The mandibular plane angle (MP-SN) significantly correlated with ramus height (Co-Go) and posterior facial height (PFH) but not with anterior lower face height (ALFH) or anterior total face height (ATFH). The ALFH and ATFH showed significant correlation with anterior cranial base length (S-N), whereas PFH showed significant correlation with the mandible (S-B) and maxilla (S-A) anteroposterior position. High or low mandibular plane angle might not necessarily be accompanied by long or short anterior face height, respectively. The PFH rather than AFH is assumed to play a key role in the vertical facial type whereas AFH seems to undergo relatively intrinsic growth.
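For readers who want to reproduce this kind of analysis on their own landmark data, a minimal sketch of the correlation step using SciPy (variable names and values are illustrative only):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical cephalometric measurements for a handful of skulls (mm / degrees).
mp_sn = np.array([28.0, 35.5, 31.2, 40.1, 26.7])   # mandibular plane angle (MP-SN)
pfh   = np.array([52.0, 45.3, 49.8, 42.6, 55.1])   # posterior facial height
alfh  = np.array([63.0, 66.2, 61.5, 68.0, 60.4])   # anterior lower face height

for name, values in [("PFH", pfh), ("ALFH", alfh)]:
    r, p = pearsonr(mp_sn, values)
    print(f"MP-SN vs {name}: r = {r:.2f}, p = {p:.3f}")
```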
Automated facial acne assessment from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application is presented that provides analysis of facial skin health using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: those that are papules and those that are pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
La Padula, Simone; Hersant, Barbara; SidAhmed, Mounia; Niddam, Jeremy; Meningaud, Jean Paul
2016-07-01
Most patients requesting aesthetic rejuvenation treatment expect to look healthier and younger. Some scales for ageing assessment have been proposed, but none is focused on patient age prediction. The aim of this study was to develop and validate a new facial rating scale assessing facial ageing sign severity. One thousand Caucasian patients were included and assessed. The Rasch model was used as part of the validation process. A score was attributed to each patient, based on the scales we developed. The correlation between the real age and scores obtained, the inter-rater reliability and test-retest reliability were analysed. The objective was to develop a tool enabling the assigning of a patient to a specific age range based on the calculated score. All scales exceeded criteria for acceptability, reliability and validity. The real age strongly correlated with the total facial score in both sex groups. The test-retest reliability confirmed this strong correlation. We developed a facial ageing scale which could be a useful tool to assess patients before and after rejuvenation treatment and an important new metrics to be used in facial rejuvenation and regenerative clinical research. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Living with Moebius syndrome: adjustment, social competence, and satisfaction with life.
Bogart, Kathleen Rives; Matsumoto, David
2010-03-01
Moebius syndrome is a rare congenital condition that results in bilateral facial paralysis. Several studies have reported social interaction and adjustment problems in people with Moebius syndrome and other facial movement disorders, presumably resulting from lack of facial expression. To determine whether adults with Moebius syndrome experience increased anxiety and depression and/or decreased social competence and satisfaction with life compared with people without facial movement disorders. Internet-based quasi-experimental study with comparison group. Thirty-seven adults with Moebius syndrome recruited through the United States-based Moebius Syndrome Foundation newsletter and Web site and 37 age- and gender-matched control participants recruited through a university participant database. Anxiety and depression, social competence, satisfaction with life, ability to express emotion facially, and questions about Moebius syndrome symptoms. People with Moebius syndrome reported significantly lower social competence than the matched control group and normative data but did not differ significantly from the control group or norms in anxiety, depression, or satisfaction with life. In people with Moebius syndrome, degree of facial expression impairment was not significantly related to the adjustment variables. Many people with Moebius syndrome are better adjusted than previous research suggests, despite their difficulties with social interaction. To enhance interaction, people with Moebius syndrome could compensate for the lack of facial expression with alternative expressive channels.
Cochlear Implantation in Patients With CHARGE Syndrome.
Rah, Yoon Chan; Lee, Ji Young; Suh, Myung-Whan; Park, Moo Kyun; Lee, Jun Ho; Chang, Sun O; Oh, Seung-Ha
2016-11-01
To determine the optimal surgical approach for cochlear implantation (CI) preoperatively based on the spatial relation of a displaced facial nerve (FN) and middle ear structures and to analyze clinical outcomes of CHARGE syndrome. Facial nerve displacement and associated deviation of inner ear structures were analyzed in 13 patients (17 ears) with CHARGE syndrome who underwent CI. Surgical accessibility through the facial recess was assessed based on anatomical landmarks. Postoperative speech performance and associated clinical characteristics were analyzed. The most consistently identified ear anomalies were semicircular canal aplasia (100%), ossicular anomaly (100%), and vestibular hypoplasia (88%). Facial nerve displacement was found in 77% of cases (anteroinferior: 47%, anterior: 24%, inferior: 6%). The width of available surgical space around facial recess was significantly greater in cases of facial recess approach (2.85 ± 0.9 mm) than those of alternative approach (0.12 ± 0.29 mm, P = .02). Postoperatively, 53% achieved better than category 4 on the categories of auditory perception (CAP) scale. The CAP category was significantly correlated with internal auditory canal diameter (P = .025) and did not differ according to the applied surgical approach. Preoperative determination of surgical accessibility through facial recess would be useful for safe surgical approach, and successful hearing rehabilitation was achievable by applying appropriate surgical approaches. © The Author(s) 2016.
Association of Frontal and Lateral Facial Attractiveness.
Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F
2018-01-01
Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features that contribute to score discordance between frontal and lateral attractiveness scores, each of these image pairs were scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projections and absent in the other respective facial projections. Attractiveness scores obtained from internet-based focus groups. For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores resulted in a coefficient of determination of r2 = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested. Thin upper lip, convex nose, and blunt cervicomental angle were suggested by evaluators as facial characteristics that contributed to outlier frontal or lateral attractiveness scores. This study identified a strong linear association between frontal and lateral facial attractiveness. Furthermore, specific facial landmarks responsible for the discordance between frontal and lateral facial attractiveness scores were suggested. Additional studies are necessary to determine whether correction of these landmarks may increase facial harmony and attractiveness. NA.
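A minimal sketch of how outliers beyond a 95% individual prediction interval could be flagged in such a frontal-versus-lateral regression, assuming statsmodels and purely illustrative scores:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
frontal = rng.uniform(3.5, 9.5, size=240)                  # hypothetical frontal scores
lateral = 0.9 * frontal + 0.4 + rng.normal(0, 0.6, 240)    # correlated lateral scores

X = sm.add_constant(frontal)
fit = sm.OLS(lateral, X).fit()
print(f"r^2 = {fit.rsquared:.3f}")

# 95% prediction interval for individual observations.
pred = fit.get_prediction(X).summary_frame(alpha=0.05)
outliers = (lateral < pred["obs_ci_lower"]) | (lateral > pred["obs_ci_upper"])
print(f"{int(outliers.sum())} image pairs fall outside the 95% prediction interval")
```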
Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing
2009-07-01
We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces in a multidimensional space. Five dimensions accounted optimally for the judgments of both children and adults, with similar local clustering of faces. However, the fit of the MDS solutions was better for adults, in part because children's responses were more variable. More children relied predominantly on a single dimension, namely eye color, whereas adults appeared to use multiple dimensions for each judgment. The pattern of findings suggests that children's mental representation of faces has a structure similar to that of adults but that children's judgments are influenced less consistently by that overall structure.
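A minimal sketch of the MDS step, assuming a precomputed dissimilarity matrix derived from the odd-man-out judgments and scikit-learn (the data here are random placeholders):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for 20 faces, e.g. derived from how
# often each pair was judged most different in the odd-man-out task.
rng = np.random.default_rng(1)
d = rng.uniform(0.1, 1.0, size=(20, 20))
dissim = (d + d.T) / 2.0
np.fill_diagonal(dissim, 0.0)

# Embed the faces in a 5-dimensional perceptual space, as in the study.
mds = MDS(n_components=5, dissimilarity="precomputed", random_state=0)
face_space = mds.fit_transform(dissim)
print(face_space.shape, f"stress = {mds.stress_:.2f}")
```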
LBP and SIFT based facial expression recognition
NASA Astrophysics Data System (ADS)
Sumer, Omer; Gunes, Ece O.
2015-02-01
This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem in which seven classes are considered: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, a mean accuracy of 93.1% is obtained on the CK+ database. The performance of an LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if good localization of facial points and a suitable partitioning strategy are followed.
NASA Astrophysics Data System (ADS)
Mihardja, H.; Meuratana, PA; Ibrahim, A.
2017-08-01
Damage to the facial nerve due to trauma from traffic accidents is the second most common cause of paralysis of the facial nerve. The treatments include both pharmacological and non-pharmacological therapy. Acupuncture is a method of treatment that applies evidence-based medical principles and uses anatomy, physiology, and pathology to place needles at certain acupuncture points. This paper describes a 26-year-old female patient with right-side facial palsy following a traffic accident who had an improved Brackmann's score after 12 sessions of acupuncture treatment. The acupuncture points were chosen based on Liu Yan's brain-clearing needling technique. Acupuncture can shorten healing time and improve the effect of treatment for facial-nerve paralysis.
ERIC Educational Resources Information Center
Yang, Manshu; Chow, Sy-Miin
2010-01-01
Facial electromyography (EMG) is a useful physiological measure for detecting subtle affective changes in real time. A time series of EMG data contains bursts of electrical activity that increase in magnitude when the pertinent facial muscles are activated. Whereas previous methods for detecting EMG activation are often based on deterministic or…
An optimized ERP brain-computer interface based on facial expression changes.
Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2014-06-01
Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
Cherubino, Mario; Turri-Zanoni, Mario; Battaglia, Paolo; Giudice, Marco; Pellegatta, Igor; Tamborini, Federico; Maggiulli, Francesca; Guzzetti, Luca; Di Giovanna, Danilo; Bignami, Maurizio; Calati, Carolina; Castelnuovo, Paolo; Valdatta, Luigi
2017-01-01
Complex cranio-orbito-facial defects after skull base cancer resection require functional and esthetic reconstruction. The introduction of endoscopically assisted excision techniques, together with advances in reconstructive surgery and anesthesiology, has improved the management of such critical patients. We report a series of chimeric anterolateral thigh (ALT) flaps used to reconstruct complex cranio-orbital-facial defects after skull base surgery. A retrospective review of patients that underwent cranio-orbito-facial reconstruction using a chimeric ALT flap from March 2013 to October 2015 at a single tertiary care referral Institute was performed. All patients were affected by locally advanced malignant tumors and the resulting defects involved the skull base in all cases. The ALT flaps were perforator-based flaps with different components: fascia, skin and muscle. The different flap territories had independent vascular supply and were independent of any physical interconnection except where linked by a common source vessel. Ten patients were included in the study. Three patients underwent adjuvant radiotherapy and chemotherapy. The mean hospitalization time was 21 days (range, 8-24 days). One failure was observed. After a mean follow-up of 12.4 months, 3 patients died of the disease, 2 are alive with disease, while 5 patients (50%) are currently alive without evidence of disease. The chimeric ALT flap is a reliable and versatile reconstructive option for complex cranio-orbito-facial defects resulting from skull base surgery. The chimeric flap composed of different territories proved to be adequate for a patient-tailored three-dimensional reconstruction of the defects as well as able to withstand the postoperative adjuvant treatments. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with facial defects. To overcome this problem, a silicone prosthesis is designed to cover the defective part. This study describes the techniques used in the design and fabrication of a facial prosthesis applying computer-aided design and manufacturing (CAD/CAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. The normal nose shape for the patient was retrieved from the nasal digital library. A mirror imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to check the result virtually. After the final design was confirmed, the mould design was created. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced from this computer-aided method was acceptable for facial rehabilitation and provides a better quality of life.
Oxygenated-Blood Colour Change Thresholds for Perceived Facial Redness, Health, and Attractiveness
Re, Daniel E.; Whitehead, Ross D.; Xiao, Dengke; Perrett, David I.
2011-01-01
Blood oxygenation level is associated with cardiovascular fitness, and raising oxygenated blood colouration in human faces increases perceived health. The current study used a two-alternative forced choice (2AFC) psychophysics design to quantify the oxygenated blood colour (redness) change threshold required to affect perception of facial colour, health and attractiveness. Detection thresholds for colour judgments were lower than those for health and attractiveness, which did not differ. The results suggest redness preferences do not reflect a sensory bias, rather preferences may be based on accurate indications of health status. Furthermore, results suggest perceived health and attractiveness may be perceptually equivalent when they are assessed based on facial redness. Appearance-based motivation for lifestyle change can be effective; thus future studies could assess the degree to which cardiovascular fitness increases face redness and could quantify changes in aerobic exercise needed to increase facial attractiveness. PMID:21448270
Creating speech-synchronized animation.
King, Scott A; Parent, Richard E
2005-01-01
We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.
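The coarticulation model described above blends visemes using dominance functions. A rough sketch of that idea (the negative-exponential dominance shape and all parameter values are assumptions, not the authors' exact formulation):

```python
import numpy as np

def dominance(t, center, magnitude=1.0, rate=8.0):
    """Negative-exponential dominance of one viseme, peaking at its center time."""
    return magnitude * np.exp(-rate * np.abs(t - center))

def blend_visemes(t, visemes):
    """Blend viseme parameter vectors weighted by their dominance at time t."""
    weights = np.array([dominance(t, v["center"]) for v in visemes])
    targets = np.array([v["params"] for v in visemes])
    return (weights[:, None] * targets).sum(axis=0) / weights.sum()

# Two hypothetical visemes described by (lip opening, lip protrusion) parameters.
visemes = [
    {"center": 0.10, "params": np.array([0.8, 0.1])},   # e.g. an open-mouth viseme
    {"center": 0.35, "params": np.array([0.2, 0.9])},   # e.g. a rounded-lip viseme
]

for t in (0.10, 0.22, 0.35):
    print(t, blend_visemes(t, visemes))
```

Because each viseme's influence decays smoothly around its center, neighbouring visemes overlap and shape each other, which is the essence of coarticulation.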
Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan
2018-01-01
It is an important question how human beings achieve efficient recognition of others' facial expressions in cognitive neuroscience, and it has been identified that specific cortical regions show preferential activation to facial expressions in previous studies. However, the potential contributions of the connectivity patterns in the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from the functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, which includes general interactions between distributed brain regions, and that contributes to the human facial expression recognition.
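A minimal sketch of the kind of fcMVPA decoding analysis described above, assuming whole-brain FC patterns have already been vectorised per block; scikit-learn's SVM and cross-validation stand in for the study's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical data: 120 blocks x 4950 FC edges (upper triangle of a 100-ROI matrix),
# with labels for the six basic expressions.
rng = np.random.default_rng(0)
fc_patterns = rng.normal(size=(120, 4950))
labels = np.repeat(np.arange(6), 20)      # anger, disgust, fear, joy, sadness, surprise

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, fc_patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1/6:.2f})")
```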
Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus
2005-01-01
The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has given further impetus to the rapid advancement of complex virtual human facial models. With face-to-face communication being the most natural form of human interaction, facial animation systems have become more attractive in the information technology era for many applications. The production of computer-animated movies using synthetic actors remains a challenging task. A facial expression carries the signature of emotions such as happiness, sadness, anger or cheerfulness. The mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expression. Facial expressions, being a complex and important nonverbal communication channel, are difficult to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. BSI is used to generate the natural face, while FACS is employed to reflect the exact facial muscle movements for four basic emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The resulting perception of realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
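A rough illustration of the BSI component described above: each expression is a weighted sum of blend-shape offsets added to the neutral face (the tiny meshes and weights here are placeholders, not the authors' data):

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Linear blend shape interpolation: neutral + sum_i w_i * (target_i - neutral)."""
    deltas = targets - neutral[None, :, :]            # per-target vertex offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# A toy 4-vertex "face" and two blend-shape targets (e.g. 'smile' and 'brow raise').
neutral = np.zeros((4, 3))
smile = neutral.copy();      smile[0] = [0.0, -0.2, 0.1]
brow_raise = neutral.copy(); brow_raise[3] = [0.0, 0.3, 0.0]
targets = np.stack([smile, brow_raise])

# A half-strength smile combined with a slight brow raise.
face = blend_shapes(neutral, targets, weights=np.array([0.5, 0.2]))
print(face)
```

In a FACS-driven setup, each weight would be tied to the activation of one or more action units rather than set by hand.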
Three-dimensional analysis of facial shape and symmetry in twins using laser surface scanning.
Djordjevic, J; Jadallah, M; Zhurov, A I; Toma, A M; Richmond, S
2013-08-01
Three-dimensional analysis of facial shape and symmetry in twins. Faces of 37 twin pairs [19 monozygotic (MZ) and 18 dizygotic (DZ)] were laser scanned at the age of 15 during a follow-up of the Avon Longitudinal Study of Parents and Children (ALSPAC), South West of England. Facial shape was analysed using two methods: 1) Procrustes analysis of landmark configurations (63 x, y and z coordinates of 21 facial landmarks) and 2) three-dimensional comparisons of facial surfaces within each twin pair. Monozygotic and DZ twins were compared using ellipsoids representing 95% of the variation in landmark configurations and surface-based average faces. Facial symmetry was analysed by superimposing the original and mirror facial images. Both analyses showed greater similarity of facial shape in MZ twins, with lower third being the least similar. Procrustes analysis did not reveal any significant difference in facial landmark configurations of MZ and DZ twins. The average faces of MZ and DZ males were coincident in the forehead, supraorbital and infraorbital ridges, the bridge of the nose and lower lip. In MZ and DZ females, the eyes, supraorbital and infraorbital ridges, philtrum and lower part of the cheeks were coincident. Zygosity did not seem to influence the amount of facial symmetry. Lower facial third was the most asymmetrical. Three-dimensional analyses revealed differences in facial shapes of MZ and DZ twins. The relative contribution of genetic and environmental factors is different for the upper, middle and lower facial thirds. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this assumption is rarely satisfied because of differences in lighting, shading, race and so on. In order to solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learnt primitive patterns are transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE and NVIE databases show that the sparse-coding-based transfer learning method can effectively improve the expression recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
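A minimal sketch of the "learn a dictionary, then sparse-code expression features" idea described above, using scikit-learn; the source/target split and all sizes are assumptions:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
source_feats = rng.normal(size=(200, 64))   # features from the source (training) domain
target_feats = rng.normal(size=(50, 64))    # features from a different (test) domain

# Learn a common primitive model (the dictionary) on the source domain.
dict_learner = DictionaryLearning(n_components=32, transform_algorithm="omp",
                                  random_state=0)
dict_learner.fit(source_feats)

# Transfer: sparse-code both domains against the same learnt dictionary.
coder = SparseCoder(dictionary=dict_learner.components_,
                    transform_algorithm="omp", transform_n_nonzero_coefs=5)
source_codes = coder.transform(source_feats)
target_codes = coder.transform(target_feats)
print(source_codes.shape, target_codes.shape)   # (200, 32) and (50, 32)
```

The sparse codes then serve as the shared feature representation on which a cross-domain expression classifier is trained.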
Realistic facial expression of virtual human based on color, sweat, and tears effects.
Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan
2014-01-01
Generating extreme appearances, such as sweating when scared, tears when crying, and blushing (in anger and happiness), is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles and emotions, and the fluid properties of the sweat and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tearing for all extreme expressions. The proposed method contributes towards the development of the facial animation and game industries as well as computer graphics.
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole-face facial recognition system while using only ~1/3 of the facial region.
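A minimal sketch of the feature-selection step described above, pairing a forward sequential selector with an SVM in scikit-learn (the geometric feature matrix here is random placeholder data):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
geom_feats = rng.normal(size=(300, 40))        # e.g. 40 eye/eyebrow geometric features
labels = rng.integers(0, 5, size=300)          # five expression classes

svm = SVC(kernel="linear")
sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                direction="forward", cv=3)
sfs.fit(geom_feats, labels)

selected = np.flatnonzero(sfs.get_support())
print("selected feature indices:", selected)

# Train the final classifier on the reduced feature set.
svm.fit(sfs.transform(geom_feats), labels)
```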
Tsukiura, Takashi
2012-01-01
In our daily lives, we form some impressions of other people. Although those impressions are affected by many factors, face-based affective signals such as facial expression, facial attractiveness, or trustworthiness are important. Previous psychological studies have demonstrated the impact of facial impressions on remembering other people, but little is known about the neural mechanisms underlying this psychological process. The purpose of this article is to review recent functional MRI (fMRI) studies to investigate the effects of face-based affective signals including facial expression, facial attractiveness, and trustworthiness on memory for faces, and to propose a tentative concept for understanding this affective-cognitive interaction. On the basis of the aforementioned research, three brain regions are potentially involved in the processing of face-based affective signals. The first candidate is the amygdala, where activity is generally modulated by both affectively positive and negative signals from faces. Activity in the orbitofrontal cortex (OFC), as the second candidate, increases as a function of perceived positive signals from faces; whereas activity in the insular cortex, as the third candidate, reflects a function of face-based negative signals. In addition, neuroscientific studies have reported that the three regions are functionally connected to the memory-related hippocampal regions. These findings suggest that the effects of face-based affective signals on memory for faces could be modulated by interactions between the regions associated with the processing of face-based affective signals and the hippocampus as a memory-related region. PMID:22837740
Facial recognition performance of female inmates as a result of sexual assault history.
Islam-Zwart, Kayleen A; Heath, Nicole M; Vik, Peter W
2005-06-01
This study examined the effect of sexual assault history on facial recognition performance. Gender of facial stimuli and posttraumatic stress disorder (PTSD) symptoms also were expected to influence performance. Fifty-six female inmates completed an interview and the Wechsler Memory Scale-Third Edition Faces I and Faces II subtests (Wechsler, 1997). Women with a sexual assault exhibited better immediate and delayed facial recognition skills than those with no assault history. There were no differences in performance based on the gender of faces or PTSD diagnosis. Immediate facial recognition was correlated with report of PTSD symptoms. Findings provide greater insight into women's reactions to, and the uniqueness of, the trauma of sexual victimization.
A study on facial expressions recognition
NASA Astrophysics Data System (ADS)
Xu, Jingjing
2017-09-01
In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, facial landmark detection and localization, classification, and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several techniques related to facial expression recognition and pose handling are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and face formalization based on robust statistics.
van de Graaf, R C; IJpma, F F A; Nicolai, J-P A; Werker, P M N
2009-11-01
Bell's palsy is the eponym for idiopathic peripheral facial paralysis. It is named after Sir Charles Bell (1774-1842), who, in the first half of the nineteenth century, discovered the function of the facial nerve and attracted the attention of the medical world to facial paralysis. Our knowledge of this condition before Bell's landmark publications is very limited and is based on just a few documents. In 1804 and 1805, Evert Jan Thomassen à Thuessink (1762-1832) published what appears to be the first known extensive study on idiopathic peripheral facial paralysis. His description of this condition was quite accurate. He located several other early descriptions and concluded from this literature that, previously, the condition had usually been confused with other afflictions (such as 'spasmus cynicus', central facial paralysis and trigeminal neuralgia). According to Thomassen à Thuessink, idiopathic peripheral facial paralysis and trigeminal neuralgia were related, being different expressions of the same condition. Thomassen à Thuessink believed that idiopathic peripheral facial paralysis was caused by 'rheumatism' or exposure to cold. Many aetiological theories have since been proposed. Despite this, the cold hypothesis persists even today.
[Screening for psychiatric risk factors in facial trauma patients. Validating a questionnaire].
Foletti, J M; Bruneau, S; Farisse, J; Thiery, G; Chossegros, C; Guyot, L
2014-12-01
We recorded similarities between patients managed in the psychiatry department and in the maxillo-facial surgical unit. Our hypothesis was that some psychiatric conditions act as risk factors for facial trauma. Our aim was to test this hypothesis and to validate a simple and efficient questionnaire to identify these psychiatric disorders. Fifty-eight consenting patients with facial trauma, recruited prospectively in the 3 maxillo-facial surgery departments of the Marseille area over 3 months (December 2012-March 2013), completed a self-questionnaire based on the French version of 3 validated screening tests (Self Reported Psychopathy test, Rapid Alcohol Problem Screening test quantity-frequency, and Personal Health Questionnaire). This preliminary study confirmed that psychiatric conditions detected by our questionnaire, namely alcohol abuse and dependence, substance abuse, and depression, were risk factors for facial trauma. Maxillo-facial surgeons are often unaware of psychiatric disorders that may be the cause of facial trauma. The self-screening test we propose makes it possible to document the psychiatric history of patients and to implement earlier psychiatric care. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Hybrid generative-discriminative approach to age-invariant face recognition
NASA Astrophysics Data System (ADS)
Sajid, Muhammad; Shafique, Tamoor
2018-03-01
Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions that are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on morphological face database II (MORPH II), face and gesture recognition network (FG-NET), and Verification Subset of cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
Objective grading of facial paralysis using Local Binary Patterns in video processing.
He, Shu; Soraghan, John J; O'Reilly, Brian F
2008-01-01
This paper presents a novel framework for objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) in the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine the micro-patterns and large-scale patterns into a feature vector, which increases the algorithmic robustness and reduces noise effects while still retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. Support Vector Machine (SVM) is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
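A minimal sketch of the Resistor-Average Distance used above to compare LBP features from the two sides of the face, assuming the features are normalised histograms and using the standard definition RAD(p, q) = KL(p||q)·KL(q||p) / (KL(p||q) + KL(q||p)):

```python
import numpy as np
from scipy.stats import entropy   # entropy(p, q) is the KL divergence D(p || q)

def resistor_average_distance(p, q, eps=1e-12):
    """Resistor-average distance between two normalised histograms."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    kl_pq, kl_qp = entropy(p, q), entropy(q, p)
    total = kl_pq + kl_qp
    return 0.0 if total == 0 else (kl_pq * kl_qp) / total

# Hypothetical LBP histograms from the left and right halves of one facial region.
left  = np.array([12, 30, 25, 8, 5, 20], float)
right = np.array([10, 28, 30, 6, 7, 19], float)
print(f"RAD = {resistor_average_distance(left, right):.4f}")
```

Larger RAD values indicate stronger left-right asymmetry of the measured facial movement.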
Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment.
Seshadri, Keshav; Savvides, Marios
2016-10-01
We propose a facial alignment algorithm that is able to jointly deal with the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Our approach proceeds from sparse to dense landmarking steps using a set of specific models trained to best account for the shape and texture variation manifested by facial landmarks and facial shapes across pose and various expressions. We also propose the use of a novel l1-regularized least squares approach that we incorporate into our shape model, which is an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms. Our approach is compared against several state-of-the-art methods on many challenging test datasets and exhibits a higher fitting accuracy on all of them.
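The l1-regularized least squares idea mentioned above can be sketched as fitting sparse coefficients over a basis of shape modes; the sketch below uses scikit-learn's Lasso as a stand-in and treats the shape basis and observed landmarks as placeholders, not the authors' model:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_landmarks, n_modes = 68, 20

mean_shape = rng.normal(size=2 * n_landmarks)               # stacked (x, y) coordinates
shape_basis = rng.normal(size=(2 * n_landmarks, n_modes))    # e.g. PCA shape modes

# Observed (possibly occluded/noisy) landmarks generated from a few active modes.
true_coeffs = np.zeros(n_modes); true_coeffs[[1, 4, 7]] = [2.0, -1.5, 0.8]
observed = mean_shape + shape_basis @ true_coeffs + rng.normal(0, 0.05, 2 * n_landmarks)

# l1-regularized least squares fit of the shape coefficients.
lasso = Lasso(alpha=0.01, fit_intercept=False)
lasso.fit(shape_basis, observed - mean_shape)
fitted_shape = mean_shape + shape_basis @ lasso.coef_
print("non-zero coefficients:", np.flatnonzero(lasso.coef_))
```

The l1 penalty keeps the fitted shape within a sparse, plausible combination of modes, which is what makes it attractive when some landmarks are occluded or unreliable.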
Transforming Security Screening With Biometrics
2003-04-09
prompted the Defense Advanced Research Projects Agency to experiment with facial recognition technology for identification of known terrorists. While DoD...screening of individuals. Facial recognition technology has been tested to some degree for accessing highly sensitive military areas, but not for...the military can implement facial recognition to screen personnel requesting access to bases and stations, DoD is not likely to use biometrics to
Use of Biometrics within Sub-Saharan Refugee Communities
2013-12-01
fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual’s identity. Biometrics creates and...Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual’s identity...authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris. Facial recognition is
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
2016-01-01
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy. PMID:27023561
A study of patient facial expressivity in relation to orthodontic/surgical treatment.
Nafziger, Y J
1994-09-01
A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires (besides the routine static study, such as records, study models, photographs, and cephalometric tracings) the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face is studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. With video recordings of faces and photographic images taken from the video recordings, the authors have modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique, itself, is based on the defining of individual facial expressions and then codifying such expressions through the use of minimal, anatomic action units. These action units actually combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and six control subjects without dentofacial deformation have been studied. I was able to register 6278 facial expressions and then further define 18,844 action units, from the 6278 facial expressions. A classification of the facial expressions made by subject groups and repeated in quantified time frames has allowed establishment of "rules" or "norms" relating to expression, thus further enabling the making of comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to the facial expressions of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery, the type and degree of change depended on the facial structure before surgery. Changes noted tended toward a functioning that is identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and the chin. The results of our study are supported by the clinical observations and suggest that the FACS technique should be able to provide a coding for the study of facial expression.
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
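The following is an illustrative sketch, not the authors' implementation, of the general idea behind the best-performing representation they report: a bank of Gabor filters applied to grayscale face patches, pooled into a feature vector and fed to a generic classifier. The patch size, filter-bank parameters, and the scikit-learn SVM are assumptions made for the sketch.

```python
# Hedged sketch: Gabor filter-bank features for facial action classification.
# Faces and labels below are random placeholders; parameters are illustrative.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gabor_bank(ksize=15, scales=(2, 4, 6), orientations=8):
    """Build a small bank of Gabor kernels over several scales and orientations."""
    kernels = []
    for sigma in scales:
        for i in range(orientations):
            theta = np.pi * i / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              4.0 * sigma / 3.0, 0.5))
    return kernels

def gabor_features(face, kernels, grid=4):
    """Filter a grayscale face patch and pool response magnitudes on a coarse grid."""
    feats = []
    for k in kernels:
        resp = np.abs(cv2.filter2D(face.astype(np.float32), cv2.CV_32F, k))
        h, w = resp.shape
        cells = resp[: h // grid * grid, : w // grid * grid]
        cells = cells.reshape(grid, h // grid, grid, w // grid)
        feats.append(cells.mean(axis=(1, 3)).ravel())  # grid x grid pooled means
    return np.concatenate(feats)

# toy usage with random "faces" and binary action-unit labels (placeholders)
rng = np.random.default_rng(0)
faces = rng.random((40, 48, 48))
labels = rng.integers(0, 2, size=40)
bank = gabor_bank()
X = np.stack([gabor_features(f, bank) for f in faces])
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```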
Factors Influencing Perception of Facial Attractiveness: Gender and Dental Education.
Jung, Ga-Hee; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Kook, Min-Suk
2018-03-01
This study was conducted to investigate gender- and dental education-specific differences in the perception of facial attractiveness for varying ratios of the lower face contour. Two hundred eleven students (110 male respondents and 110 female respondents; aged between 20-38 years old) were requested to rate facial figures with alterations to the bigonial width and the vertical length of the lower face. We produced a standard figure based on the "golden ratio" and 4 additional series of figures with either horizontal or vertical alterations to the contour of the lower face. The preference for each figure was evaluated using a Visual Analog Scale. The Kruskal-Wallis test was used to assess differences in the preferences for each figure, and the Mann-Whitney U test was used to evaluate gender-specific differences and differences by dental education. In general, the highest preference score was given to the standard figure, whereas the facial figure with large bigonial width and chin length had the lowest score. Male respondents showed a significantly higher preference score for the facial contour that had a 0.1 proportional increase in the facial height-bigonial width ratio over that of the standard figure. For horizontal alterations to the facial profiles, there were no significant differences in the preferences by the level of dental education. For vertically altered images, the average Visual Analog Scale score was significantly lower among the dentally-educated respondents for the facial images that had a proportional 0.22 and 0.42 increase in the ratio between the vertical length of the chin and the lip. Generally, the standard image based on the golden ratio was the most preferred. A slender face appealed more to male respondents than to female respondents, and facial images with an increased lower facial height were perceived as much less attractive by the dentally-educated respondents, which suggests that dental education might confer some sensitivity to vertical changes in the lower face.
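As a minimal sketch of the statistical comparisons named above, the snippet below runs a Kruskal-Wallis test across figure variants and a Mann-Whitney U test between respondent groups. The VAS preference scores are placeholder numbers, not the study's data.

```python
# Hedged sketch of the nonparametric tests described in the abstract.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(8)
vas_by_figure = [rng.integers(30, 100, 40) for _ in range(5)]   # 5 figure variants (toy)
print("Kruskal-Wallis across figures:", kruskal(*vas_by_figure))

male = rng.integers(40, 100, 40)     # toy VAS scores for male respondents
female = rng.integers(30, 95, 40)    # toy VAS scores for female respondents
print("Mann-Whitney U between genders:", mannwhitneyu(male, female))
```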
Successful treatment of migrating partial seizures in Wolf-Hirschhorn syndrome with bromide.
Itakura, Ayako; Saito, Yoshiaki; Nishimura, Yoko; Okazaki, Tetsuya; Ohno, Koyo; Sejima, Hitoshi; Yamamoto, Toshiyuki; Maegaki, Yoshihiro
2016-08-01
A girl with mild psychomotor developmental delay developed right or left hemiclonic convulsion at 10 months of age. One month later, clusters of hemiclonic or bilateral tonic seizures with eyelid twitching emerged, resulting in status epilepticus. Treatment with phenobarbital and potassium bromide completely terminated the seizures within 10 days. Ictal electroencephalography revealed a migrating focus of rhythmic 3-4 Hz waves from the right temporal to right frontal regions and then to the left frontal regions. Genetic analysis was conducted based on the characteristic facial appearance of the patient, which identified a 2.1-Mb terminal deletion on chromosome 4p. This is the first case of Wolf-Hirschhorn syndrome complicated by epilepsy with migrating partial seizures. Copyright © 2016 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Gibelli, Daniele; Codari, Marina; Pucciarelli, Valentina; Dolci, Claudia; Sforza, Chiarella
2017-11-23
The quantitative assessment of facial modifications produced by mimicry is of relevant interest for the rehabilitation of patients who can no longer produce facial expressions. This study investigated a novel application of 3-dimensional-on-3-dimensional superimposition for assessing facial mimicry. This cross-sectional study was based on 10 men 30 to 40 years old who underwent stereophotogrammetry for neutral, happy, sad, and angry expressions. Registration of facial expressions on the neutral expression was performed. The root mean square (RMS) point-to-point distance in the labial area was calculated between each facial expression and the neutral one and was considered the main parameter for assessing facial modifications. In addition, effect size (Cohen d) was calculated to assess the effects of labial movements in relation to overall facial modifications. All participants were free from facial deformities, pathologies, or trauma that could affect facial mimicry. RMS values of facial areas differed significantly among facial expressions (P = .0004 by Friedman test). The widest modifications of the lips were observed in happy expressions (RMS, 4.06 mm; standard deviation [SD], 1.14 mm), with a statistically significant difference compared with the sad (RMS, 1.42 mm; SD, 1.15 mm) and angry (RMS, 0.76 mm; SD, 0.45 mm) expressions. The effect size of labial versus total face movements was limited for happy and sad expressions and large for the angry expression. This study found that a happy expression produces wider modifications of the lips than the other facial expressions and suggests a novel procedure for assessing regional changes due to mimicry. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
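A minimal sketch of the two quantities named above follows: the RMS point-to-point distance between a registered expression surface and the neutral one, and a pooled-standard-deviation Cohen d. It assumes the point arrays are already registered and in correspondence, and it treats the two groups of RMS values as independent samples; the numbers are placeholders.

```python
# Hedged sketch: RMS point-to-point distance and Cohen's d, with toy data.
import numpy as np

def rms_distance(expr_pts, neutral_pts):
    """Root mean square point-to-point distance between two registered surfaces."""
    d = np.linalg.norm(expr_pts - neutral_pts, axis=1)
    return np.sqrt(np.mean(d ** 2))

def cohens_d(a, b):
    """Effect size between two sets of per-subject RMS values (pooled SD)."""
    pooled = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2.0)
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(0)
neutral = rng.random((200, 3)) * 50                     # 200 registered labial points (toy)
happy = neutral + rng.normal(0, 2.0, neutral.shape)      # displaced expression surface (toy)
print("RMS (toy):", rms_distance(happy, neutral))

happy_lips = np.array([4.1, 3.8, 5.0, 4.3, 3.6])         # per-subject labial RMS (toy)
happy_face = np.array([2.0, 1.8, 2.4, 2.1, 1.7])         # per-subject whole-face RMS (toy)
print("Cohen d (toy):", cohens_d(happy_lips, happy_face))
```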
Automated detection of pain from facial expressions: a rule-based approach using AAM
NASA Astrophysics Data System (ADS)
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
2012-02-01
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
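To convey the flavor of a rule over landmark-derived cues, the sketch below flags a brow-lowering (AU4-like) event when the brow-to-eyelid distance drops well below its neutral baseline. The landmark indices, baseline window, and threshold ratio are hypothetical and are not the rules of the cited system.

```python
# Hedged sketch of a rule-based facial action cue from tracked shape vertices.
import numpy as np

def brow_eye_distance(shape, brow_idx, eye_idx):
    """Mean vertical distance between brow and upper-eyelid landmarks (y grows downward)."""
    return float(np.mean(shape[eye_idx, 1] - shape[brow_idx, 1]))

def detect_brow_lowerer(frames, brow_idx, eye_idx, neutral_frames=10, ratio=0.8):
    """Return per-frame booleans: distance < ratio * neutral baseline."""
    dists = np.array([brow_eye_distance(f, brow_idx, eye_idx) for f in frames])
    baseline = dists[:neutral_frames].mean()
    return dists < ratio * baseline

# toy usage with synthetic 68-point shapes and hypothetical landmark indices
rng = np.random.default_rng(1)
frames = [rng.random((68, 2)) * 100 for _ in range(50)]
flags = detect_brow_lowerer(frames, brow_idx=[19, 24], eye_idx=[37, 44])
print(flags.sum(), "frames flagged")
```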
Dunkley, Benjamin T; Pang, Elizabeth W; Sedge, Paul A; Jetly, Rakesh; Doesburg, Sam M; Taylor, Margot J
2016-01-01
Post-traumatic stress disorder (PTSD) is associated with atypical responses to emotional face stimuli, with preferential processing given to threat-related facial expressions via hyperactive amygdalae disengaged from medial prefrontal modulation. We examined implicit emotional face perception in soldiers with (n = 20) and without (n = 25) PTSD using magnetoencephalography to define spatiotemporal network interactions, followed by a region-of-interest analysis to characterize the network role of the right amygdala and medial prefrontal cortex in threatening face perception. Contrasts of network interactions revealed that the PTSD group was hyperconnected compared to controls in the phase-locking response in the 2-24 Hz range for angry faces, but not for happy faces. Hyperconnectivity in PTSD was greatest in the posterior cingulate, right ventromedial prefrontal cortex, right parietal regions and the right temporal pole, as well as the right amygdala. Graph measures of right amygdala and medial prefrontal connectivity revealed increases in node strength and clustering in PTSD, but not inter-node connectivity. Additionally, these measures were found to correlate with anxiety and depression. In line with prior studies, amygdala hyperconnectivity was observed in PTSD in relation to threatening faces, but the medial prefrontal cortex also displayed enhanced connectivity in our network-based approach. Overall, these results support preferential neurophysiological encoding of threat-related facial expressions in those with PTSD.
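The graph measures mentioned (node strength and weighted clustering) can be computed from a connectivity matrix as in the hedged sketch below. The symmetric phase-locking matrix and the seed index standing in for the right amygdala are placeholders, not the study's data.

```python
# Illustrative sketch: node strength and weighted clustering coefficient of a seed
# region, computed from a placeholder phase-locking connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_regions = 10
plv = rng.random((n_regions, n_regions))
plv = (plv + plv.T) / 2.0            # make the connectivity matrix symmetric
np.fill_diagonal(plv, 0.0)

G = nx.from_numpy_array(plv)         # weighted undirected graph
seed = 0                             # hypothetical index for the right amygdala

strength = G.degree(seed, weight="weight")            # sum of edge weights at the seed
clustering = nx.clustering(G, seed, weight="weight")  # weighted clustering coefficient
print(f"strength={strength:.2f}, clustering={clustering:.2f}")
```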
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of recognizing a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper-face AUs, and 10 lower-face AUs) is recognized, whether the AUs occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210
[Idiopathic facial paralysis in children].
Achour, I; Chakroun, A; Ayedi, S; Ben Rhaiem, Z; Mnejja, M; Charfeddine, I; Hammami, B; Ghorbel, A
2015-05-01
Idiopathic facial palsy is the most common cause of facial nerve palsy in children. Controversy exists regarding treatment options. The objectives of this study were to review the epidemiological and clinical characteristics as well as the outcome of idiopathic facial palsy in children to suggest appropriate treatment. A retrospective study was conducted on children with a diagnosis of idiopathic facial palsy from 2007 to 2012. A total of 37 cases (13 males, 24 females) with a mean age of 13.9 years were included in this analysis. The mean duration between onset of Bell's palsy and consultation was 3 days. Of these patients, 78.3% had moderately severe (grade IV) or severe paralysis (grade V on the House and Brackmann grading). Twenty-seven patients were treated in an outpatient context, three patients were hospitalized, and seven patients were treated as outpatients and subsequently hospitalized. All patients received corticosteroids. Eight of them also received antiviral treatment. The complete recovery rate was 94.6% (35/37). The duration of complete recovery was 7.4 weeks. Children with idiopathic facial palsy have a very good prognosis. The complete recovery rate exceeds 90%. However, controversy exists regarding treatment options. High-quality studies have been conducted on adult populations. Medical treatment based on corticosteroids alone or combined with antiviral treatment is certainly effective in improving facial function outcomes in adults. In children, the recommendation for prescription of steroids and antiviral drugs based on adult treatment appears to be justified. Randomized controlled trials in the pediatric population are recommended to define a strategy for management of idiopathic facial paralysis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.
Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah
2016-01-01
An initial assessment method is proposed that can classify facial paralysis and categorize its severity into one of six levels according to the House-Brackmann (HB) system, based on facial landmark motion analyzed with an Optical Flow (OF) algorithm. The desired landmarks were obtained from the video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis according to the HB system. The proposed method has obtained promising results and may play a pivotal role in improved rehabilitation programs for patients.
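A hedged sketch of the tracking-plus-area idea follows, using OpenCV's pyramidal Lucas-Kanade optical flow (the KLT tracker's core) and a shoelace-formula area over the tracked points. The video path, the use of corner detection as a stand-in for hand-placed landmarks, and all parameters are placeholders, not the authors' pipeline.

```python
# Hedged sketch: KLT-style point tracking and an area measure over tracked points.
import cv2
import numpy as np

def polygon_area(pts):
    """Shoelace formula for the area enclosed by ordered 2D points."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

cap = cv2.VideoCapture("facial_exercise.avi")            # placeholder path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=30, qualityLevel=0.01,
                              minDistance=7)              # stand-in for landmarks
areas = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.ravel() == 1]
    areas.append(polygon_area(good.reshape(-1, 2)))
    prev_gray, pts = gray, good.reshape(-1, 1, 2)
cap.release()
print("mean tracked-region area:", np.mean(areas) if areas else "no frames")
```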
ERIC Educational Resources Information Center
Camras, Linda A.; Oster, Harriet; Bakeman, Roger; Meng, Zhaolan; Ujiie, Tatsuo; Campos, Joseph J.
2007-01-01
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants'…
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis of Traditional Chinese Medicine, in this paper, we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: Face Image Database, Image Preprocessing Module and Diagnosis Engine. The Face Image Database was built from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. The quantitative color feature is extracted from facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color feature and diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.
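The sketch below illustrates the quantitative-color-plus-KNN idea: per-channel color statistics from a facial region fed to a k-nearest-neighbour classifier. The images, labels, neighbour count, and class proportions are synthetic placeholders, not the study's data or exact features.

```python
# Hedged sketch: mean/std color features and KNN classification, with toy data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def color_features(face_rgb):
    """Mean and standard deviation per channel over the facial region of interest."""
    px = face_rgb.reshape(-1, 3).astype(float)
    return np.concatenate([px.mean(axis=0), px.std(axis=0)])

rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(145, 32, 32, 3))   # 116 patients + 29 controls (toy)
labels = rng.integers(0, 3, size=145)                   # healthy / jaundice / no jaundice (toy)
X = np.stack([color_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, labels, cv=5).mean())
```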
Variant facial artery in the submandibular region.
Vadgaonkar, Rajanigandha; Rai, Rajalakshmi; Prabhu, Latha V; Bv, Murlimanju; Samapriya, Neha
2012-07-01
The facial artery has been considered the most important vascular pedicle in facial rejuvenation procedures and submandibular gland (SMG) resection. It usually arises from the external carotid artery and passes from the carotid to the digastric triangle, deep to the posterior belly of the digastric muscle, and lodges in a groove at the posterior end of the SMG. It then passes between the SMG and the mandible to reach the face after winding around the base of the mandible. During a routine dissection of a 62-year-old female cadaver at Kasturba Medical College Mangalore, an unusual pattern in the cervical course of the facial artery was revealed. The right facial artery was found to pierce the whole substance of the SMG before winding around the lower border of the mandible to enter the facial region. Awareness of the existence of such a variant and of its comparison to the normal anatomy will be useful to oral and maxillofacial surgeons.
Gribova, N P; Iudel'son, Ia B; Golubev, V L; Abramenkova, I V
2003-01-01
To carry out a differential diagnosis between two facial dyskinesia (FD) models, facial hemispasm (FH) and facial paraspasm (FP), a combined program of electroneuromyographic (ENMG) examination was created, using statistical analyses that included object identification based on a hybrid neural network with an adaptive fuzzy logic method as well as standard statistical tests (Wilcoxon, Student's t). In FH, a lesion of the peripheral facial neuromotor apparatus predominated, with augmentation of interneuron function at segmental and suprasegmental brainstem levels. In FP, primary afferent strengthening in the mimic muscles was accompanied by increased motor neuron activity and reciprocal augmentation of the interneurons inhibiting the motor portion of the fifth cranial nerve. The mathematical algorithm for ENMG result recognition developed in the study provides a precise differentiation of the two FD models and opens possibilities for the differential diagnosis of other facial motor disorders.
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels within phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are then combined with the sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
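The core idea, building a truncated PCA basis for a facial region's motion-capture data and editing a frame by adjusting its PCA coefficients (blendshape weights), can be sketched as below. The marker data, the number of retained components, and the edit itself are placeholders for illustration only; the constrained weight propagation step is not reproduced.

```python
# Hedged sketch: a truncated PCA basis as a per-region blendshape model,
# with editing performed in coefficient (weight) space. Data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
frames = rng.random((500, 3 * 30))          # 500 frames x (30 markers * xyz) for one region

pca = PCA(n_components=8)                   # keep the largest eigenvectors
weights = pca.fit_transform(frames)         # per-frame blendshape weights (PCA coefficients)

edited = weights.copy()
edited[100, 0] += 2.0                       # e.g., exaggerate the first mode at frame 100
edited_frames = pca.inverse_transform(edited)   # back to marker space

print(np.abs(edited_frames[100] - frames[100]).max())
```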
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effect of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, these negative effects are minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
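The mutual-information similarity between two grayscale facial images can be estimated from their joint intensity histogram, as in the minimal sketch below; the EMD decomposition and IMF selection steps of the algorithm are not reproduced, and the images are random placeholders.

```python
# Hedged sketch: histogram-based mutual information between two grayscale images.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two equal-sized grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
probe = ref + 0.05 * rng.standard_normal((64, 64))   # similar image (e.g., same pose)
unrelated = rng.random((64, 64))                      # dissimilar image
print(mutual_information(ref, probe), mutual_information(ref, unrelated))
```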
Makedonska, Jana; Wright, Barth W.; Strait, David S.
2012-01-01
A fundamental challenge of morphology is to identify the underlying evolutionary and developmental mechanisms leading to correlated phenotypic characters. Patterns and magnitudes of morphological integration and their association with environmental variables are essential for understanding the evolution of complex phenotypes, yet the nature of the relevant selective pressures remains poorly understood. In this study, the adaptive significance of morphological integration was evaluated through the association between feeding mechanics, ingestive behavior and craniofacial variation. Five capuchin species were examined: Cebus apella sensu stricto, Cebus libidinosus, Cebus nigritus, Cebus olivaceus and Cebus albifrons. Twenty three-dimensional landmarks were chosen to sample facial regions experiencing high strains during feeding, characteristics affecting muscular mechanical advantage, and basicranial regions. Integration structure and magnitude between and within the oral and zygomatic subunits, between and within blocks maximizing modularity, and within the face, the basicranium and the cranium were examined using partial least squares, eigenvalue variance, integration indices compared interspecifically at a common level of sampled population variance, and cluster analyses. Results are consistent with previous findings reporting a relative constancy of facial and cranial correlation patterns across mammals, while covariance magnitudes vary. Results further suggest that food material properties structure integration among functionally-linked facial elements and possibly integration between the face and the basicranium. Hard-object-feeding capuchins, especially C. apella s.s., whose faces experience particularly high biomechanical loads, are characterized by higher facial and cranial integration, especially compared to C. albifrons, likely because morphotypes compromising feeding performance are selected against in species relying on obdurate fallback foods. This is the first study to report a link between food material properties and facial and cranial integration. Furthermore, the results do not identify the consistent presence of cranial modules, yielding support to suggestions that, despite the distinct embryological imprints of its elements, the cranium of placental mammals is not characterized by a modular architecture. PMID:23110039
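One of the integration indices named above, eigenvalue variance, can be sketched as the (relative) variance of the eigenvalues of a trait correlation matrix, which grows as traits covary more strongly. The landmark-derived trait data below are random placeholders, and the scaling used is a common convention rather than the study's exact index.

```python
# Hedged sketch: relative eigenvalue variance as a morphological integration index.
import numpy as np

def relative_eigenvalue_variance(traits):
    """traits: (n_specimens, n_traits) array; returns a value in [0, 1]."""
    corr = np.corrcoef(traits, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    p = corr.shape[0]
    return np.var(eig) / (p - 1)     # 0 = no integration, 1 = maximal integration

rng = np.random.default_rng(9)
independent = rng.standard_normal((60, 8))        # uncorrelated traits
shared = rng.standard_normal((60, 1))
integrated = independent * 0.3 + shared           # traits sharing a common factor
print(relative_eigenvalue_variance(independent),
      relative_eigenvalue_variance(integrated))
```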
Is moral beauty different from facial beauty? Evidence from an fMRI study
Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald
2015-01-01
Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010
Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas
2012-12-01
The "golden ratio" is considered as a universal facial aesthetical standard. Researcher's opinion that deviation from golden ratio can result in development of facial abnormalities. This study was designed to study the facial morphology and to identify individuals with normal, short, and long face. We studied 300 Malaysian nationality subjects aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and width of face, and physiognomical facial index was calculated. Face shape was classified based on golden ratio. Independent t test was done to test the difference between sexes and among the races. The mean values of the measurements and index showed significant sexual and interracial differences. Out of 300 subjects, the face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. As anticipated, the measurements showed variations according to gender and race. Only 60 subjects had a regular face shape, and remaining 240 subjects had irregular face shape (short and long). Since the short and long shape individuals may be at risk of developing various disorders, the knowledge of facial shapes in the given population is important for early diagnostic and treatment procedures.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition, including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
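The regularization idea behind RDA can be sketched as blending each class covariance with the pooled covariance (interpolating between QDA and LDA) and shrinking it toward a scaled identity. In the paper the blending parameters are tuned by PSO; in the hedged sketch below they are fixed constants, the data are synthetic, and the boosting wrapper is omitted.

```python
# Hedged sketch of regularized discriminant analysis (RDA) with fixed parameters.
import numpy as np

def rda_fit(X, y, lam=0.5, gam=0.1):
    classes = np.unique(y)
    pooled = np.cov(X, rowvar=False)
    model = {}
    for c in classes:
        Xc = X[y == c]
        Sc = np.cov(Xc, rowvar=False)
        S = (1 - lam) * Sc + lam * pooled                           # blend toward LDA
        S = (1 - gam) * S + gam * np.trace(S) / X.shape[1] * np.eye(X.shape[1])  # shrink
        const = np.log(len(Xc) / len(X)) - 0.5 * np.linalg.slogdet(S)[1]
        model[c] = (Xc.mean(axis=0), np.linalg.inv(S), const)
    return model

def rda_predict(model, X):
    scores = []
    for mu, Sinv, const in model.values():
        d = X - mu
        scores.append(const - 0.5 * np.einsum("ij,jk,ik->i", d, Sinv, d))
    return np.array(list(model.keys()))[np.argmax(scores, axis=0)]

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1.5, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
model = rda_fit(X, y)
print("training accuracy:", (rda_predict(model, X) == y).mean())
```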
Petrican, Raluca; Todorov, Alexander; Grady, Cheryl
2016-01-01
Character judgments, based on facial appearance, impact both perceivers’ and targets’ interpersonal decisions and behaviors. Nonetheless, the resilience of such effects in the face of longer acquaintanceship duration is yet to be determined. To address this question, we had 51 elderly long-term married couples complete self and informant versions of a Big Five Inventory. Participants were also photographed, while they were requested to maintain an emotionally neutral expression. A subset of the initial sample completed a shortened version of the Big Five Inventory in response to the pictures of other opposite sex participants (with whom they were unacquainted). Oosterhof and Todorov’s (2008) computer-based model of face evaluation was used to generate facial trait scores on trustworthiness, dominance, and attractiveness, based on participants’ photographs. Results revealed that structural facial characteristics, suggestive of greater trustworthiness, predicted positively biased, global informant evaluations of a target’s personality, among both spouses and strangers. Among spouses, this effect was impervious to marriage length. There was also evidence suggestive of a Dorian Gray effect on personality, since facial trustworthiness predicted not only spousal and stranger, but also self-ratings of extraversion. Unexpectedly, though, follow-up analyses revealed that (low) facial dominance, rather than (high) trustworthiness, was the strongest predictor of self-rated extraversion. Our present findings suggest that subtle emotional cues, embedded in the structure of emotionally neutral faces, exert long-lasting effects on personality judgments even among very well-acquainted targets and perceivers. PMID:27330234
Facial expression influences face identity recognition during the attentional blink.
Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J
2014-12-01
Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry-suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.
Consensus on Changing Trends, Attitudes, and Concepts of Asian Beauty.
Liew, Steven; Wu, Woffles T L; Chan, Henry H; Ho, Wilson W S; Kim, Hee-Jin; Goodman, Greg J; Peng, Peter H L; Rogers, John D
2016-04-01
Asians increasingly seek non-surgical facial esthetic treatments, especially at younger ages. Published recommendations and clinical evidence mostly reference Western populations, but Asians differ from them in terms of attitudes to beauty, structural facial anatomy, and signs and rates of aging. A thorough knowledge of the key esthetic concerns and requirements for the Asian face is required to strategize appropriate facial esthetic treatments with botulinum toxin and hyaluronic acid (HA) fillers. The Asian Facial Aesthetics Expert Consensus Group met to develop consensus statements on concepts of facial beauty, key esthetic concerns, facial anatomy, and aging in Southeastern and Eastern Asians, as a prelude to developing consensus opinions on the cosmetic facial use of botulinum toxin and HA fillers in these populations. Beautiful and esthetically attractive people of all races share similarities in appearance while retaining distinct ethnic features. Asians between the third and sixth decades age well compared with age-matched Caucasians. Younger Asians' increasing requests for injectable treatments to improve facial shape and three-dimensionality often reflect a desire to correct underlying facial structural deficiencies or weaknesses that detract from ideals of facial beauty. Facial esthetic treatments in Asians are not aimed at Westernization, but rather the optimization of intrinsic Asian ethnic features, or correction of specific underlying structural features that are perceived as deficiencies. Thus, overall facial attractiveness is enhanced while retaining esthetic characteristics of Asian ethnicity. Because Asian patients age differently than Western patients, different management and treatment planning strategies are utilized. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266.
Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick
2012-01-01
Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.
Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean
2017-11-01
People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the "mapping anatomical correlations across the cerebral cortex" methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face region seed and the right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and the right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
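A minimal sketch of the seed-based structural covariance idea follows: across subjects, the mean cortical thickness in a seed region is correlated with thickness at every other vertex to produce a covariance map. The thickness data, vertex counts, and seed definition below are placeholders, not the study's surfaces or the cited methodology's full pipeline.

```python
# Hedged sketch: seed-based structural covariance map from per-subject thickness data.
import numpy as np

rng = np.random.default_rng(10)
n_subjects, n_vertices = 15, 1000
thickness = rng.standard_normal((n_subjects, n_vertices))   # toy cortical thickness
seed_vertices = np.arange(50)                                # hypothetical fusiform seed
seed_mean = thickness[:, seed_vertices].mean(axis=1)

# Pearson correlation of the seed thickness with every vertex (the covariance "map")
z = (thickness - thickness.mean(0)) / thickness.std(0)
zs = (seed_mean - seed_mean.mean()) / seed_mean.std()
cov_map = (z * zs[:, None]).mean(axis=0)
print(cov_map.shape, cov_map[:5])
```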
Home-use TriPollar RF device for facial skin tightening: Clinical study results.
Beilin, Ghislaine
2011-04-01
Professional, non-invasive, anti-aging treatments based on radio-frequency (RF) technologies are popular for skin tightening and improvement of wrinkles. A new home-use RF device for facial treatments has recently been developed based on TriPollar™ technology. To evaluate the STOP™ home-use device for facial skin tightening using objective and subjective methods. Twenty-three female subjects used the STOP at home for a period of 6 weeks followed by a maintenance period of 6 weeks. Facial skin characteristics were objectively evaluated at baseline and at the end of the treatment and maintenance periods using a three-dimensional imaging system. Additionally, facial wrinkles were classified and subjects scored their satisfaction and sensations. Following STOP treatment, a statistically significant reduction of perioral and periorbital wrinkles was achieved in 90% and 95% of the patients, respectively, with an average periorbital wrinkle reduction of 41%. This objective result correlated well with the periorbital wrinkle classification result of 40%. All patients were satisfied to extremely satisfied with the treatments and all reported moderate to excellent visible results. The clinical study demonstrated the safety and efficacy of the STOP home-use device for facial skin tightening. Treatment can maintain a tighter and suppler skin with improvement of fine lines and wrinkles.
A shape-based account for holistic face processing.
Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle
2016-04-01
Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. (c) 2016 APA, all rights reserved.
Grouping patients for masseter muscle genotype-phenotype studies.
Moawad, Hadwah Abdelmatloub; Sinanan, Andrea C M; Lewis, Mark P; Hunt, Nigel P
2012-03-01
To use various facial classifications, including either/both vertical and horizontal facial criteria, to assess their effects on the interpretation of masseter muscle (MM) gene expression. Fresh MM biopsies were obtained from 29 patients (age, 16-36 years) with various facial phenotypes. Based on clinical and cephalometric analysis, patients were grouped using three different classifications: (1) basic vertical, (2) basic horizontal, and (3) combined vertical and horizontal. Gene expression levels of the myosin heavy chain genes MYH1, MYH2, MYH3, MYH6, MYH7, and MYH8 were recorded using quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and were related to the various classifications. The significance level for statistical analysis was set at P ≤ .05. Using classification 1, none of the MYH genes were found to be significantly different between long face (LF) patients and the average vertical group. Using classification 2, MYH3, MYH6, and MYH7 genes were found to be significantly upregulated in retrognathic patients compared with prognathic and average horizontal groups. Using classification 3, only the MYH7 gene was found to be significantly upregulated in retrognathic LF compared with prognathic LF, prognathic average vertical faces, and average vertical and horizontal groups. The use of basic vertical or basic horizontal facial classifications may not be sufficient for genetics-based studies of facial phenotypes. Prognathic and retrognathic facial phenotypes have different MM gene expressions; therefore, it is not recommended to combine them into one single group, even though they may have a similar vertical facial phenotype.
Anatomical and neuropsychological effects of cluster munitions.
Fares, Youssef; Fares, Jawad
2013-12-01
The aim of this article is to investigate the effects of cluster munitions at the environmental, anatomical and neuropsychological levels. We conducted a study to explore the effects of sub-munitions on Lebanese victims. The study included a total of 407 cases that had been subjected to the detonation of unexploded sub-munitions in Lebanon between 2006 and 2011. In our series, 356 casualties were injured and 51 died; 382 were males and 25 were females. We recorded 83 cases of amputations, and injuries involving the cranio-facial regions, thorax, abdomen, and upper and lower extremities. These injuries led to loss of function, body disfiguration, and chronic pain caused by the injuries or the amputations, as well as post-traumatic stress disorder. The peripheral nervous system was most often affected, and patients suffered from significant psychosocial tribulations. Cluster munitions harm human beings and decrease biodiversity. Survivors suffer from physical and psychological impairments. Laws should be passed and enforced to ban the use of these detrimental weapons, which have negative effects at the ecosystem and societal levels.
Decoding facial expressions based on face-selective and motion-sensitive areas.
Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin
2017-06-01
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
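A hedged sketch of the MVPA decoding setup follows: a linear SVM trained on region-of-interest voxel patterns with leave-one-run-out cross-validation. The voxel patterns, run structure, and classifier choice are synthetic stand-ins, not the study's data or exact analysis.

```python
# Hedged sketch: multi-voxel pattern analysis with leave-one-run-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(7)
n_runs, trials_per_run, n_voxels, n_classes = 8, 12, 200, 6
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))        # ROI patterns (toy)
y = np.tile(np.repeat(np.arange(n_classes), trials_per_run // n_classes), n_runs)
runs = np.repeat(np.arange(n_runs), trials_per_run)                  # run labels

clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print("decoding accuracy:", acc.mean(), "chance level:", 1 / n_classes)
```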
Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min
2010-01-01
The aim of this article was to demonstrate a novel approach to designing facial prostheses using the transplantation concept and computer-assisted technology for extensive, large maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted to the patient's face where the defect was located, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fit onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpting techniques. The discomfort generally associated with such conventional methods was decreased greatly. The virtual transplantation used to design the facial prosthesis for the maxillofacial defect, which crossed the facial midline, and the development of the retention structure for the eye were both feasible.
Volk, Gerd Fabian; Pohlmann, Martin; Finkensieper, Mira; Chalmers, Heather J; Guntinas-Lichius, Orlando
2014-01-01
While standardized methods are established to examine the pathway from motor cortex to the peripheral nerve in patients with facial palsy, a reliable method to evaluate the facial muscles in patients with long-term palsy for therapy planning is lacking. A 3D ultrasonographic (US) acquisition system driven by a motorized linear mover combined with a conventional US probe was used to acquire 3D data sets of several facial muscles on both sides of the face in a healthy subject and seven patients with different types of unilateral degenerative facial nerve lesions. The US results were correlated to the duration of palsy and the electromyography results. Consistent 3D US-based volumetry through bilateral comparison was feasible for parts of the frontalis muscle, orbicularis oculi muscle, depressor anguli oris muscle, depressor labii inferioris muscle, and mentalis muscle. With the exception of the frontal muscle, the facial muscle volumes were much smaller on the palsy side (minimum: 3% for the depressor labii inferioris muscle) than on the healthy side in patients with severe facial nerve lesions. In contrast, the frontal muscles did not show a side difference. In the two patients with defective healing after spontaneous regeneration, a decrease in muscle volume was not seen. Synkinesis and hyperkinesis were, moreover, correlated with muscle hypertrophy on the palsy side compared with the healthy side. 3D ultrasonography seems to be a promising tool for regional and quantitative evaluation of facial muscles in patients with facial palsy receiving facial reconstructive surgery or conservative treatment.
Deep facial analysis: A new phase I epilepsy evaluation using computer vision.
Ahmedt-Aristizabal, David; Fookes, Clinton; Nguyen, Kien; Denman, Simon; Sridharan, Sridha; Dionisio, Sasha
2018-05-01
Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, in which facial movements have largely been ignored. This is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset has been collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) and is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when a frontal view of the face is available. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average area under the ROC curve (AUC) of 0.98. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in accuracy for the model, as it is affected by data limitations, and achieves an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance the automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. The computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery. Copyright © 2018 Elsevier Inc. All rights reserved.
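The gap between multifold and leave-one-subject-out accuracy reported above reflects a general evaluation issue that can be sketched independently of the deep models. The snippet below contrasts subject-agnostic k-fold cross-validation with leave-one-subject-out grouping on placeholder features; the random forest is a generic stand-in, not the paper's landmark- or region-based networks.

```python
# Hedged sketch: subject-agnostic k-fold vs leave-one-subject-out cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(11)
n_subjects, clips_per_subject, n_features = 10, 20, 30
X = rng.standard_normal((n_subjects * clips_per_subject, n_features))   # toy clip features
y = rng.integers(0, 2, size=len(X))                                      # toy binary labels
subjects = np.repeat(np.arange(n_subjects), clips_per_subject)           # subject identity

clf = RandomForestClassifier(n_estimators=100, random_state=0)
kfold_acc = cross_val_score(clf, X, y, cv=5).mean()
loso_acc = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut()).mean()
print(f"5-fold: {kfold_acc:.2f}  leave-one-subject-out: {loso_acc:.2f}")
```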
Three-dimensional visualization system as an aid for facial surgical planning
NASA Astrophysics Data System (ADS)
Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles
2001-05-01
We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of the human facial aspect after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into an object-oriented, portable and scriptable visualization software package that allows the practitioner to manipulate and visualize them interactively. Several LOD (level-of-detail) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft tissue deformations by creating a physically based spring model between both tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information such as participation hints at the vertex level between a warped generic model and the facial mesh.
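The spring-energy equilibrium idea can be illustrated with a toy sketch: after a bone node is displaced by the planned move, the free soft-tissue node positions are found by minimizing the total spring energy. The tiny mesh, stiffnesses, and displacement below are placeholders, not the system's actual soft-tissue model.

```python
# Hedged toy sketch: spring-energy minimization for soft-tissue equilibrium.
import numpy as np
from scipy.optimize import minimize

springs = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 0.5)]    # (node_i, node_j, stiffness)
rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
rest_len = {(i, j): np.linalg.norm(rest[i] - rest[j]) for i, j, _ in springs}

fixed = {0: rest[0] + np.array([0.3, 0.0])}          # bone node displaced by the surgical plan
free_nodes = [1, 2]                                   # soft-tissue nodes left to equilibrate

def energy(flat):
    """Total elastic energy of the springs for the candidate free-node positions."""
    pos = rest.copy()
    pos[free_nodes] = flat.reshape(-1, 2)
    for i, p in fixed.items():
        pos[i] = p
    e = 0.0
    for i, j, k in springs:
        stretch = np.linalg.norm(pos[i] - pos[j]) - rest_len[(i, j)]
        e += 0.5 * k * stretch ** 2
    return e

res = minimize(energy, rest[free_nodes].ravel(), method="BFGS")
print("new soft-tissue node positions:\n", res.x.reshape(-1, 2))
```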
A 63-year-old man with peripheral facial nerve paralysis and a pulmonary lesion.
Yserbyt, J; Wilms, G; Lievens, Y; Nackaerts, K
2009-01-01
Occasionally, malignant neoplasms may cause peripheral facial nerve paralysis as a presenting symptom. A 63-year-old man was referred to the Emergency Department because of a peripheral facial nerve paralysis lasting for 10 days. Initial diagnostic examinations revealed no apparent cause for this facial nerve paralysis. Chest X-ray, however, showed a suspicious tumoural mass located in the right hilar region, as confirmed by CT scan. The diagnosis of an advanced-stage lung adenocarcinoma was finally confirmed by bronchial biopsy. MRI showed diffuse brain metastases and revealed a pontine lesion as the most probable underlying cause of the peripheral facial nerve paralysis. Platinum-based palliative chemotherapy was given after initial pancranial irradiation. According to the MRI findings, the pontine lesion was responsible for the peripheral facial nerve paralysis, the initial presenting symptom in this case of lung adenocarcinoma. This case of peripheral facial nerve paralysis, caused by a pontine brain metastasis, illustrates a rather rare presenting symptom of metastatic lung cancer.
Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute
2016-01-01
Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether automatic mimic responses to emotional facial expressions are deficient in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment) and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (the task-irrelevant stimulus feature). We measured electromyographic activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect, based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression, did not differ between the groups. In stable patients, despite a reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335
Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.
Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan
2017-08-01
Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, precondition, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of a graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of brows, nasal, and philtral symmetry as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative score of eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry as well as oral commissure symmetry at rest (p < 0.05). Smile evaluation revealed no significant change of oral commissure excursion. The mean Glasgow Benefit Inventory score indicated substantial improvement in patients' overall quality of life. If a primary facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.
Marur, Tania; Tuna, Yakup; Demirci, Selman
2014-01-01
Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature, with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting two series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were then available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the ''best'' facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Liu, Zhi-dan; He, Jiang-bo; Guo, Si-si; Yang, Zhi-xin; Shen, Jun; Li, Xiao-yan; Liang, Wei; Shen, Wei-dong
2015-08-25
Although many patients with facial paralysis have benefited or completely recovered after acupuncture or electroacupuncture therapy, objective evidence beyond neurological function scales and limited electrophysiological data remains difficult to provide. Hence, the aim of this study is to use more intuitive and reliable detection techniques, such as facial nerve magnetic resonance imaging (MRI), nerve electromyography, and F waves, to observe changes in the anatomic morphology of the facial nerve and in nerve conduction before and after acupuncture or electroacupuncture, and to verify their effectiveness in combination with neurological function scales. A total of 132 patients with Bell's palsy (grades III and IV in the House-Brackmann [HB] Facial Nerve Grading System) will be randomly divided into electroacupuncture, manual acupuncture, non-acupuncture, and medicine control groups. All patients except those in the medicine control group will be given electroacupuncture treatment after the acute period. The acupuncture or electroacupuncture treatments will be performed every 2 days until the patients recover or withdraw from the study. The primary outcome is analysis based on facial nerve functional scales (HB scale and Sunnybrook facial grading system), and the secondary outcome is analysis based on MRI, nerve electromyography, and F-wave detection. All patients will undergo MRI within 3 days after onset of Bell's palsy for observation of the signal intensity and facial nerve swelling of the unaffected and affected sides. They will also undergo facial nerve electromyography and F-wave detection within 1 week after onset of Bell's palsy. Nerve function will be evaluated using the HB scale and Sunnybrook facial grading system at each hospital visit for treatment until the end of the study. The MRI, nerve electromyography, and F-wave detection will be repeated 1 month after the onset of Bell's palsy. Chinese Clinical Trials Register identifier: ChiCTR-IPR-14005730. Registered on 23 December 2014.
Morphological quantitative criteria and aesthetic evaluation of eight female Han face types.
Zhao, Qiming; Zhou, Rongrong; Zhang, XuDong; Sun, Huafeng; Lu, Xin; Xia, Dongsheng; Song, Mingli; Liang, Yang
2013-04-01
Human facial aesthetics relies on the classification of facial features and standards of attractiveness. However, there are no widely accepted quantitative criteria for facial attractiveness, particularly for Chinese Han faces. Establishing quantitative standards of attractiveness for facial landmarks within facial types is important for planning outcomes in cosmetic plastic surgery. The aim of this study was to determine quantitatively the criteria for attractiveness of eight female Chinese Han facial types. A photographic database of young Chinese Han women's faces was created. Photographed faces (450) were classified based on eight established types and scored for attractiveness. Measurements taken at seven standard facial landmarks and their relative proportions were analyzed for correlations to attractiveness scores. Attractive faces of each type were averaged via an image-morphing algorithm to generate synthetic facial types. Results were compared with the neoclassical ideal and data for Caucasians. Morphological proportions corresponding to the highest attractiveness scores for Chinese Han women differed from the neoclassical ideal. In our population of young, normal, healthy Han women, high attractiveness ratings were given to those with greater temporal width and pogonion-gonion distance, and smaller bizygomatic and bigonial widths. As attractiveness scores increased, the ratio of the temporal to bizygomatic widths increased, and the ratio of the distance between the pogonion and gonion to the bizygomatic width also increased slightly. Among the facial types, the oval and inverted triangular were the most attractive. The neoclassical ideal of attractiveness does not apply to Han faces. However, the proportion of faces considered attractive in this population was similar to that of Caucasian populations.
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We used Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of their different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, and kiss. For every raw RGBD record, a set of facial feature points on the color image, such as the eye corners, mouth contour, and nose tip, are automatically localized and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, our database offers a much richer collection of expressions for every person, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
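The bilinear model mentioned above can be sketched as a tensor contraction. The example below uses a randomly generated core tensor and toy dimensions (not FaceWarehouse's actual data) to show how an identity weight vector and an expression weight vector select a single face mesh.

```python
# Minimal sketch, assuming synthetic data: contracting a rank-3 core tensor
# (vertices x identities x expressions) with identity and expression weights.
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_id, n_exp = 3 * 1000, 50, 20          # toy sizes, not FaceWarehouse's
core = rng.normal(size=(n_vertices, n_id, n_exp))   # stand-in core tensor

w_id = rng.dirichlet(np.ones(n_id))      # identity weights (sum to 1)
w_exp = np.zeros(n_exp)
w_exp[3] = 1.0                           # pick one expression blendshape

# Mode-2 and mode-3 products collapse the tensor to a single face mesh.
face = np.einsum('vie,i,e->v', core, w_id, w_exp)
print(face.shape)   # (3000,): flattened x, y, z coordinates of the generated mesh
```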
ERIC Educational Resources Information Center
Vail, Kathleen
1995-01-01
Biometrics (hand geometry, iris and retina scanners, voice and facial recognition, signature dynamics, facial thermography, and fingerprint readers) identifies people based on physical characteristics. Administrators worried about kidnapping, vandalism, theft, and violent intruders might welcome these security measures when they become more…
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filtered shifted phase-encoded joint transform correlation (WPJTC) technique is proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal choice is proposed by considering discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternative pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and the extended Yale facial database under different conditions such as illumination variation, noise, and 3D changes in facial expression. Test results show that the proposed WPJTC yields better performance than alternative JTC-based face recognition techniques.
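The optical correlator itself cannot be reproduced here, but its basic operation, Fourier-domain correlation of a reference face with a test scene after a crude wavelet-style filtering step, can be sketched numerically. Everything below (the images, the Haar-like detail filter, the shift) is a simplified stand-in for illustration, not the proposed WPJTC.

```python
# Minimal sketch: phase-only correlation after a one-level Haar-style detail filter.
import numpy as np

def haar_detail(img):
    """One-level horizontal Haar detail band, a rough stand-in for wavelet filtering."""
    img = img[:, : img.shape[1] // 2 * 2]          # keep an even number of columns
    return (img[:, 0::2] - img[:, 1::2]) / 2.0

def phase_correlate(ref, test):
    """Phase-only correlation; the peak marks where ref appears inside test."""
    F_ref, F_test = np.fft.fft2(ref), np.fft.fft2(test)
    cross = F_test * np.conj(F_ref)
    return np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real

rng = np.random.default_rng(2)
face = rng.normal(size=(64, 64))                   # stand-in reference face
scene = np.roll(face, (5, 10), axis=(0, 1))        # same face shifted within the scene

peak = phase_correlate(haar_detail(face), haar_detail(scene))
row, col = np.unravel_index(np.argmax(peak), peak.shape)
print(row, col)   # ~(5, 5): row shift 5, column shift 10 halved by the detail band
```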
A New Method of Facial Expression Recognition Based on SPE Plus SVM
NASA Astrophysics Data System (ADS)
Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei
A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained than with traditional algorithms such as PCA and LDA. The results further demonstrate the effectiveness of the proposed algorithm.
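A minimal sketch of the pipeline shape only: nonlinear dimensionality reduction followed by an SVM. Stochastic proximity embedding is not available in scikit-learn, so Isomap stands in for SPE here, and the data are random placeholders rather than JAFFE images.

```python
# Sketch of "embedding + SVM" classification; Isomap is a stand-in for SPE.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(213, 64 * 64))      # stand-in for vectorized face images
y = rng.integers(0, 7, size=213)         # 7 expression classes, random labels here

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

embed = Isomap(n_components=20).fit(X_tr)     # SPE would be fitted at this step
svm = SVC(kernel='rbf', C=10.0).fit(embed.transform(X_tr), y_tr)

print("test accuracy:", svm.score(embed.transform(X_te), y_te))  # ~chance on random data
```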
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
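To make the subclass idea concrete, the sketch below splits each class (facial-feature patches versus context patches) into subclasses by k-means clustering, a simpler stand-in for the discriminant-analysis and AdaBoost-based splits the authors derive, and trains a classifier over the subclass labels; all descriptors are synthetic.

```python
# Minimal sketch: subclass division by clustering, then feature-vs-context detection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
eye_patches = rng.normal(loc=1.0, size=(300, 50))        # stand-in "feature" descriptors
context_patches = rng.normal(loc=-1.0, size=(300, 50))   # surrounding-region descriptors

def to_subclasses(X, n_sub, offset):
    """Label each sample with its cluster id, offset so subclass ids do not collide."""
    return KMeans(n_clusters=n_sub, n_init=10, random_state=0).fit_predict(X) + offset

y = np.concatenate([to_subclasses(eye_patches, 3, 0),       # subclasses 0..2: feature
                    to_subclasses(context_patches, 3, 3)])   # subclasses 3..5: context
X = np.vstack([eye_patches, context_patches])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# At detection time, a window is accepted as the facial feature when its predicted
# subclass belongs to the feature group (ids < 3) rather than the context group.
probe = rng.normal(loc=1.0, size=(1, 50))
print("detected as feature:", clf.predict(probe)[0] < 3)
```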
Yamasaki, Fumiyuki; Akiyama, Yuji; Tsumura, Ryu; Kolakshyapati, Manish; Adhikari, Rupendra Bahadur; Takayasu, Takeshi; Nosaka, Ryo; Kurisu, Kaoru
2016-07-01
Traumatic injuries of the abducens nerve as a consequence of facial and/or head trauma occur with or without associated cervical or skull base fracture. This is the first report of unilateral avulsion of the abducens nerve, in a 29-year-old man with severe right facial trauma. In addition, he exhibited mild left facial palsy and moderate left hearing disturbance. Magnetic resonance imaging (MRI) using fast imaging employing steady-state acquisition (FIESTA) revealed avulsion of the left sixth cranial nerve. We recommend thin-slice MR examination in patients with abducens palsy after severe facial and/or head trauma.
Subspecialization in the human posterior medial cortex
Bzdok, Danilo; Heeger, Adrian; Langner, Robert; Laird, Angela R.; Fox, Peter T.; Palomero-Gallagher, Nicola; Vogt, Brent A.; Zilles, Karl; Eickhoff, Simon B.
2014-01-01
The posterior medial cortex (PMC) is particularly poorly understood. Its neural activity changes have been related to highly disparate mental processes. We therefore investigated PMC properties with a data-driven exploratory approach. First, we subdivided the PMC by whole-brain coactivation profiles. Second, functional connectivity of the ensuing PMC regions was compared by task-constrained meta-analytic coactivation mapping (MACM) and task-unconstrained resting-state correlations (RSFC). Third, PMC regions were functionally described by forward/reverse functional inference. A precuneal cluster was mostly connected to the intraparietal sulcus, frontal eye fields, and right temporo-parietal junction; associated with attention and motor tasks. A ventral posterior cingulate cortex (PCC) cluster was mostly connected to the ventromedial prefrontal cortex and middle left inferior parietal cortex (IPC); associated with facial appraisal and language tasks. A dorsal PCC cluster was mostly connected to the dorsomedial prefrontal cortex, anterior/posterior IPC, posterior midcingulate cortex, and left dorsolateral prefrontal cortex; associated with delay discounting. A cluster in the retrosplenial cortex was mostly connected to the anterior thalamus and hippocampus. Furthermore, all PMC clusters were congruently coupled with the default mode network according to task-constrained but not task-unconstrained connectivity. We thus identified distinct regions in the PMC and characterized their neural networks and functional implications. PMID:25462801
Evidence-Based Medicine in Facial Trauma.
Dougherty, William M; Christophel, John Jared; Park, Stephen S
2017-11-01
This article provides the reader with a comprehensive review of high-level evidence-based medicine in facial trauma and highlights areas devoid of high-level evidence. The article is organized in the order one might approach a clinical problem: starting with the workup, followed by treatment considerations, operative decisions, and postoperative treatments. Individual injuries are discussed within each section, with an overview of the available high-level clinical evidence. This article not only provides a quick reference for the facial traumatologist, but also allows the reader to identify areas that lack high-level evidence, perhaps motivating future endeavors. Copyright © 2017 Elsevier Inc. All rights reserved.
Mothers' pupillary responses to infant facial expressions.
Yrttiaho, Santeri; Niehaus, Dana; Thomas, Eileen; Leppänen, Jukka M
2017-02-06
Human parental care relies heavily on the ability to monitor and respond to a child's affective states. The current study examined pupil diameter as a potential physiological index of mothers' affective response to infant facial expressions. Pupillary time-series were measured from 86 mothers of young infants in response to an array of photographic infant faces falling into four emotive categories based on valence (positive vs. negative) and arousal (mild vs. strong). Pupil dilation was highly sensitive to the valence of facial expressions, being larger for negative vs. positive facial expressions. A separate control experiment with luminance-matched non-face stimuli indicated that the valence effect was specific to facial expressions and cannot be explained by luminance confounds. Pupil response was not sensitive to the arousal level of facial expressions. The results show the feasibility of using pupil diameter as a marker of mothers' affective responses to ecologically valid infant stimuli and point to a particularly prompt maternal response to infant distress cues.
The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity
Korb, Sebastian; With, Stéphane; Niedenthal, Paula; Kaiser, Susanne; Grandjean, Didier
2014-01-01
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns. PMID:24918939
Caricaturing facial expressions.
Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I
2000-08-14
The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
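The caricaturing operation itself is simple enough to sketch: each landmark is moved along the line from the reference norm to the target expression by a chosen percentage. The landmark coordinates below are invented for illustration.

```python
# Minimal sketch of caricaturing: exaggerate the difference from a reference norm.
import numpy as np

neutral_norm = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, -0.8]])   # e.g. eyes, mouth
fear_face    = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, -1.0]])   # raised eyes, open mouth

def caricature(face, norm, level):
    """level = 0 gives the norm, 1 the original face, >1 an exaggerated caricature."""
    return norm + level * (face - norm)

for level in (0.5, 1.0, 1.5):       # anti-caricature, veridical, +50% caricature
    print(level, caricature(fear_face, neutral_norm, level).round(2).tolist())
```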
Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W
2018-04-01
People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.
Is moral beauty different from facial beauty? Evidence from an fMRI study.
Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald
2015-06-01
Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Happe, Svenja; Bunten, Sabine
2012-01-01
Unilateral facial weakness is common. Transcranial magnetic stimulation (TMS) allows identification of a conduction failure at the level of the canalicular portion of the facial nerve and may help to confirm the diagnosis. We retrospectively analyzed 216 patients with a diagnosis of peripheral facial palsy. The electrophysiological investigations included the blink reflex, preauricular electrical stimulation, and the response to TMS at the labyrinthine part of the canalicular portion of the facial nerve within 3 days after symptom onset. Of the 216 patients (107 female, mean age 49.7 ± 18.0 years), 193 were diagnosed with Bell's palsy. Test results of the remaining patients led to the diagnosis of infectious [including herpes simplex, varicella zoster infection and borreliosis (n = 13)] or noninfectious [including diabetes and neoplasm (n = 10)] etiologies. A similar reduction or loss of the TMS amplitude (p < 0.005) on the affected side was seen in each patient group. A conduction block on TMS supports the diagnosis of peripheral facial palsy without being specific for Bell's palsy. These data shed light on the TMS-based diagnosis of peripheral facial palsy and its ability to localize the site of the lesion within the fallopian canal regardless of the underlying pathology. Copyright © 2012 S. Karger AG, Basel.
Botulinum toxin and the facial feedback hypothesis: can looking better make you feel happier?
Alam, Murad; Barrett, Karen C; Hodapp, Robert M; Arndt, Kenneth A
2008-06-01
The facial feedback hypothesis suggests that muscular manipulations which result in more positive facial expressions may lead to more positive emotional states in affected individuals. In this essay, we hypothesize that the injection of botulinum toxin for upper face dynamic creases might induce positive emotional states by reducing the ability to frown and create other negative facial expressions. The use of botulinum toxin to pharmacologically alter upper face muscular expressiveness may curtail the appearance of negative emotions, most notably anger, but also fear and sadness. This occurs via the relaxation of the corrugator supercilii and the procerus, which are responsible for brow furrowing, and to a lesser extent, because of the relaxation of the frontalis. Concurrently, botulinum toxin may dampen some positive expressions like the true smile, which requires activity of the orbicularis oculi, a muscle also relaxed after toxin injections. On balance, the evidence suggests that botulinum toxin injections for upper face dynamic creases may reduce negative facial expressions more than they reduce positive facial expressions. Based on the facial feedback hypothesis, this net change in facial expression may potentially have the secondary effect of reducing the internal experience of negative emotions, thus making patients feel less angry, sad, and fearful.
Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas
2012-01-01
Background: The “golden ratio” is considered a universal facial aesthetic standard, and researchers have suggested that deviation from the golden ratio can result in the development of facial abnormalities. Aims: This study was designed to examine facial morphology and to identify individuals with normal, short, and long faces. Materials and Methods: We studied 300 Malaysian subjects aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and facial width, from which the physiognomical facial index was calculated. Face shape was classified based on the golden ratio. Independent t tests were done to test the differences between sexes and among the races. Results: The mean values of the measurements and the index showed significant sexual and interracial differences. Of the 300 subjects, the face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. Conclusion: As anticipated, the measurements varied according to gender and race. Only 60 subjects had a regular face shape; the remaining 240 subjects had an irregular (short or long) face shape. Since individuals with short or long face shapes may be at risk of developing various disorders, knowledge of the facial shapes in a given population is important for early diagnostic and treatment procedures. PMID:23272303
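The classification described above can be sketched as a small function: compute the ratio of facial height to facial width and compare it to the golden ratio. The abstract does not give the exact cut-offs used, so the tolerance band below is purely illustrative.

```python
# Minimal sketch with hypothetical cut-offs: classify face shape by the ratio of
# physiognomical facial height to facial width relative to the golden ratio.
GOLDEN_RATIO = 1.618
TOLERANCE = 0.05                      # assumed band, not the authors' threshold

def face_shape(height_cm, width_cm):
    ratio = height_cm / width_cm
    if ratio < GOLDEN_RATIO - TOLERANCE:
        return "short face"
    if ratio > GOLDEN_RATIO + TOLERANCE:
        return "long face"
    return "normal face"

print(face_shape(18.5, 11.4))   # ratio ~1.62 -> normal face
print(face_shape(17.0, 11.4))   # ratio ~1.49 -> short face
```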
Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.
Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A
2017-12-01
To compare the detection of facial attributes by computer-based facial recognition software of 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.
Bendella, Habib; Brackmann, Derald E; Goldbrunner, Roland; Angelov, Doychin N
2016-10-01
Little is known about the reasons for occurrence of facial nerve palsy after removal of cerebellopontine angle tumors. Since the intra-arachnoidal portion of the facial nerve is considered to be so vulnerable that even the slightest tension or pinch may result in ruptured axons, we tested whether a graded stretch or controlled crush would affect the postoperative motor performance of the facial (vibrissal) muscle in rats. Thirty Wistar rats, divided into five groups (one with intact controls and four with facial nerve lesions), were used. Under inhalation anesthesia, the occipital squama was opened, the cerebellum gently retracted to the left, and the intra-arachnoidal segment of the right facial nerve exposed. A mechanical displacement of the brainstem with 1 or 3 mm toward the midline or an electromagnet-controlled crush of the facial nerve with a tweezers at a closure velocity of 50 and 100 mm/s was applied. On the next day, whisking motor performance was determined by video-based motion analysis. Even the larger (with 3 mm) mechanical displacement of the brainstem had no harmful effect: The amplitude of the vibrissal whisks was in the normal range of 50°-60°. On the other hand, even the light nerve crush (50 mm/s) injured the facial nerve and resulted in paralyzed vibrissal muscles (amplitude of 10°-15°). We conclude that, contrary to the generally acknowledged assumptions, it is the nerve crush but not the displacement-induced stretching of the intra-arachnoidal facial trunk that promotes facial palsy after cerebellopontine angle surgery in rats.
Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco
2018-06-01
This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular focus of the research on facial affect recognition without paying similar attention to the structural encoding of facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing synthesis of the results observed in the literature, while detecting face recognition tasks used on face processing abilities in ADHD and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Magnetic resonance imaging in facial injuries and digital fusion CT/MRI].
Kozakiewicz, Marcin; Olszycki, Marek; Arkuszewski, Piotr; Stefańczyk, Ludomir
2006-01-01
Magnetic resonance images (MRI) and their digital fusion with computed tomography (CT) data, obtained in patients with facial injuries, are presented in this study. MR imaging of 12 posttraumatic patients was performed in the same planes as their previous CT scans. Evaluation focused on the quality of facial soft tissue depiction, which was unsatisfactory on CT. Using our own "Dental Studio" program, digital fusion of the two modalities was performed. Pathologic dislocations and injuries of the facial soft tissues are visualized better on MRI than on CT examination. In particular, MRI properly reveals disturbances of the intraorbital soft structures. MRI-based assessment is valuable in patients with facial soft tissue injuries, especially in cases of orbital/sinus herniation. Fused CT/MRI scans allow simultaneous evaluation of the bone structures and soft tissues of the same region.
Biometric identification based on novel frequency domain facial asymmetry measures
NASA Astrophysics Data System (ADS)
Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-03-01
In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
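One simple way to realize the general idea of frequency-domain asymmetry measures (not the paper's specific biometrics) is to compare a midline-aligned face with its horizontal mirror and summarize the energy of the difference by spatial-frequency band, as in the sketch below with a synthetic image.

```python
# Minimal sketch, assuming a synthetic midline-aligned image: frequency-domain
# summary of left/right asymmetry as radial-band energies of the mirror difference.
import numpy as np

rng = np.random.default_rng(5)
face = rng.normal(size=(64, 64))                 # stand-in face image
face[:, 40:] += 0.5                              # inject some left/right asymmetry

diff = face - face[:, ::-1]                      # face minus its mirror image
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(diff)))

# Mean energy per radial frequency band serves as a compact asymmetry feature vector.
yy, xx = np.indices(spectrum.shape)
r = np.hypot(yy - 32, xx - 32).astype(int)
features = np.array([spectrum[r == k].mean() for k in range(1, 31)])
print(features.round(2))
```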
Facial soft biometric features for forensic face recognition.
Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier
2015-12-01
This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered as a soft biometric approach. The motivation of this work is to provide a set of facial features, which can be understood by non-experts such as judges and support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best systems configurations achieves rank 10 match results of 100% for ATVS database and 75% for MORPH database demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
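A minimal sketch of the landmark-to-feature step described above, with made-up landmarks: inter-landmark distances are normalized by the inter-ocular distance to give size-invariant shape features, and two faces are then compared with a simple distance-based similarity score.

```python
# Minimal sketch: soft-biometric shape features from facial landmarks.
import numpy as np

def soft_features(landmarks):
    """Selected inter-landmark distances normalized by the inter-ocular distance
    (landmarks 0 and 1 are assumed to be the eye centers)."""
    iod = np.linalg.norm(landmarks[0] - landmarks[1])
    pairs = [(0, 2), (1, 2), (2, 3), (0, 3), (1, 3)]      # eye-nose-mouth relations
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs]) / iod

face_a = np.array([[30., 40], [70, 40], [50, 60], [50, 80]])  # eyes, nose tip, mouth
face_b = face_a + np.random.default_rng(6).normal(scale=1.0, size=face_a.shape)

fa, fb = soft_features(face_a), soft_features(face_b)
similarity = 1.0 / (1.0 + np.linalg.norm(fa - fb))            # simple similarity score
print(round(similarity, 3))
```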
Yokoi, Aya; Endo, Koji; Ozawa, Toshiaki; Miyaki, Masahiro; Matsuo, Keiko; Nozawa, Kazumi; Manabe, Motomu; Takagi, Yutaka
2014-12-01
Because excess sebum and/or metabolites of sebum induce skin problems, cleansers that can remove such sebum are sought after. However, many people, especially those who have little facial sebum, are reluctant to wash off sebum thoroughly because doing so may induce dry skin. This concern stems from the finding that cleansers with a high cleansing ability tend to decrease not only facial sebum but also the natural moisturizing factors and intercellular lipids that are essential for cutaneous function. Recently, we developed a new cleanser based on sodium laureth carboxylate and alkyl carboxylates (AEC/soap) that cleans sebum well without penetrating the stratum corneum. This trial aimed to clarify the effects of sebum removal by the AEC/soap-based cleanser on the induction of dry skin. We designed a controlled, single-blind, parallel trial. Thirty female subjects with mild dry skin were assigned randomly to two groups: one group used the AEC/soap-based cleanser while the other group kept using their usual facial cleanser twice a day for 4 weeks in the winter season. Using a colored artificial sebum mixture, it was demonstrated that this cleanser washed off sebum well. Following use of this cleanser, the subjects' dry skin improved rather than worsened, as indicated by instrumental analysis and visual assessment, and these improvements were recognized by the subjects. These results suggest that the AEC/soap-based cleanser washes off facial sebum well but has little effect on the induction of dry skin because of its limited penetration into the stratum corneum. © 2014 Wiley Periodicals, Inc.
Weisbuch, Max; Grunberg, Rebecca L; Slepian, Michael L; Ambady, Nalini
2016-10-01
Beliefs about the malleability versus stability of traits (incremental vs. entity lay theories) have a profound impact on social cognition and self-regulation, shaping phenomena that range from the fundamental attribution error and group-based stereotyping to academic motivation and achievement. Less is known about the causes than the effects of these lay theories, and in the current work the authors examine the perception of facial emotion as a causal influence on lay theories. Specifically, they hypothesized that (a) within-person variability in facial emotion signals within-person variability in traits and (b) social environments replete with within-person variability in facial emotion encourage perceivers to endorse incremental lay theories. Consistent with Hypothesis 1, Study 1 participants were more likely to attribute dynamic (vs. stable) traits to a person who exhibited several different facial emotions than to a person who exhibited a single facial emotion across multiple images. Hypothesis 2 suggests that social environments support incremental lay theories to the extent that they include many people who exhibit within-person variability in facial emotion. Consistent with Hypothesis 2, participants in Studies 2-4 were more likely to endorse incremental theories of personality, intelligence, and morality after exposure to multiple individuals exhibiting within-person variability in facial emotion than after exposure to multiple individuals exhibiting a single emotion several times. Perceptions of within-person variability in facial emotion-rather than perceptions of simple diversity in facial emotion-were responsible for these effects. Discussion focuses on how social ecologies shape lay theories. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Chirivella, Praveen; Singaraju, Gowri Sankar; Mandava, Prasad; Reddy, V Karunakar; Neravati, Jeevan Kumar; George, Suja Ani
2017-01-01
Objective: To test the null hypothesis that changes in maxillary incisor inclination and position have no effect on the esthetic perception of the smiling profile in three different facial types. Materials and Methods: A smiling profile photograph with a Class I skeletal and dental pattern and a normal profile was taken for each of the three facial types: dolichofacial, mesofacial, and brachyfacial. Based on the original digital image, 15 smiling profiles per facial type were created using the FACAD software by altering the labiolingual inclination and anteroposterior position of the maxillary incisors. These photographs were rated on a visual analog scale by three panels of examiners consisting of orthodontists, dentists, and nonprofessionals, with twenty members in each group. The responses were assessed by analysis of variance (ANOVA) followed by post hoc Scheffé tests. Results: Significant differences (P < 0.001) were detected when the ratings of each photograph within each facial type were compared. In the dolichofacial and mesofacial patterns, the position of the maxillary incisor must be limited to 2 mm from the goal anterior limit line (GALL). In the brachyfacial pattern, any movement of the facial axis point of the maxillary incisors away from the GALL worsens facial esthetics. The ANOVA also showed differences among the three groups for certain facial profiles. Conclusion: The null hypothesis was rejected. The esthetic perception of the labiolingual inclination and anteroposterior position of the maxillary incisors differs among facial types, and this may affect the formulation of treatment plans for different facial types. PMID:28197396
Vioarsdóttir, Una Strand; O'Higgins, Paul; Stringer, Chris
2002-09-01
This study examines interpopulation variations in the facial skeleton of 10 modern human populations and places these in an ontogenetic perspective. It aims to establish the extent to which the distinctive features of adult representatives of these populations are present in the early postnatal period and to what extent population differences in ontogenetic scaling and allometric trajectories contribute to distinct facial forms. The analyses utilize configurations of facial landmarks and are carried out using geometric morphometric methods. The results of this study show that modern human populations can be distinguished based on facial shape alone, irrespective of age or sex, indicating the early presence of differences. Additionally, some populations have statistically distinct facial ontogenetic trajectories that lead to the development of further differences later in ontogeny. We conclude that population-specific facial morphologies develop principally through distinctions in facial shape that are probably already present at birth and are further accentuated and modified to variable degrees during growth. These findings raise interesting questions regarding the plasticity of facial growth patterns in modern humans. Further, they have important implications for the study of growth in the face of fossil hominins and for the possibility of developing effective discriminant functions for the identification of population affinities of immature facial skeletal material. Such tools would be of value in archaeological, forensic and anthropological applications. The findings of this study underline the need to examine more deeply, and in more detail, the ontogenetic basis of other causes of craniometric variation, such as sexual dimorphism and hominin species differentiation.
Seager, Dennis Craig; Kau, Chung How; English, Jeryl D; Tawfik, Wael; Bussa, Harry I; Ahmed, Abou El Yazeed M
2009-09-01
To compare the facial morphologies of an adult Egyptian population with those of a Houstonian white population. Three-dimensional (3D) images were acquired with a commercially available stereophotogrammetric camera capture system. The 3dMDface System photographed 186 subjects from two population groups (Egypt and Houston). All participants from both population groups were between 18 and 30 years of age and had no apparent facial anomalies. All facial images were overlaid and superimposed, and a complex mathematical algorithm was performed to generate a composite facial average (one male and one female) for each subgroup (EGY-M: Egyptian males; EGY-F: Egyptian females; HOU-M: Houstonian males; and HOU-F: Houstonian females). The computer-generated facial averages were superimposed based on a previously validated superimposition method, and the facial differences were evaluated and quantified. Distinct facial differences were evident between the subgroups evaluated, involving various regions of the face including the slant of the forehead and the nasal, malar, and labial regions. Overall, the mean facial difference between the Egyptian and Houstonian female subjects was 1.33 +/- 0.93 mm, while that between the Egyptian and Houstonian male subjects was 2.32 +/- 2.23 mm. The ranges of differences for the female and male population pairings were 14.34 mm and 13.71 mm, respectively. The average adult Egyptian and white Houstonian faces show distinct differences. Different populations and ethnicities have different facial features and averages.
Vivas, Esther X; Carlson, Matthew L; Neff, Brian A; Shepard, Neil T; McCracken, D Jay; Sweeney, Alex D; Olson, Jeffrey J
2018-02-01
Does intraoperative facial nerve monitoring during vestibular schwannoma surgery lead to better long-term facial nerve function? This recommendation applies to adult patients undergoing vestibular schwannoma surgery regardless of tumor characteristics. Level 3: It is recommended that intraoperative facial nerve monitoring be routinely utilized during vestibular schwannoma surgery to improve long-term facial nerve function. Can intraoperative facial nerve monitoring be used to accurately predict favorable long-term facial nerve function after vestibular schwannoma surgery? This recommendation applies to adult patients undergoing vestibular schwannoma surgery. Level 3: Intraoperative facial nerve monitoring can be used to accurately predict favorable long-term facial nerve function after vestibular schwannoma surgery. Specifically, the presence of favorable testing reliably portends a good long-term facial nerve outcome. However, the absence of favorable testing in the setting of an anatomically intact facial nerve does not reliably predict poor long-term function and therefore cannot be used to direct decision-making regarding the need for early reinnervation procedures. Does an anatomically intact facial nerve with poor electromyogram (EMG) electrical responses during intraoperative testing reliably predict poor long-term facial nerve function? This recommendation applies to adult patients undergoing vestibular schwannoma surgery. Level 3: Poor intraoperative EMG electrical response of the facial nerve should not be used as a reliable predictor of poor long-term facial nerve function. Should intraoperative eighth cranial nerve monitoring be used during vestibular schwannoma surgery? This recommendation applies to adult patients undergoing vestibular schwannoma surgery with measurable preoperative hearing levels and tumors smaller than 1.5 cm. Level 3: Intraoperative eighth cranial nerve monitoring should be used during vestibular schwannoma surgery when hearing preservation is attempted. Is direct monitoring of the eighth cranial nerve superior to the use of far-field auditory brain stem responses? This recommendation applies to adult patients undergoing vestibular schwannoma surgery with measurable preoperative hearing levels and tumors smaller than 1.5 cm. Level 3: There is insufficient evidence to make a definitive recommendation. The full guideline can be found at: https://www.cns.org/guidelines/guidelines-manage-ment-patients-vestibular-schwannoma/chapter_4. Copyright © 2017 by the Congress of Neurological Surgeons
ERIC Educational Resources Information Center
Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang
2017-01-01
Although most of the faces we encounter daily are moving ones, much of what we know about face processing and its development is based on studies using static faces that emphasize holistic processing as the hallmark of mature face processing. Here the authors examined the effects of facial movements on face processing developmentally in children…
ERIC Educational Resources Information Center
Li, Pengli; Zhang, Chunhua; Yi, Li
2016-01-01
The current study examined how children with Autism Spectrum Disorders (ASD) could selectively trust others based on three facial cues: the face race, attractiveness, and trustworthiness. In a computer-based hide-and-seek game, two face images, which differed significantly in one of the three facial cues, were presented as two cues for selective…
A Brain Network Processing the Age of Faces
Homola, György A.; Jbabdi, Saad; Beckmann, Christian F.; Bartsch, Andreas J.
2012-01-01
Age is one of the most salient aspects in faces and of fundamental cognitive and social relevance. Although face processing has been studied extensively, brain regions responsive to age have yet to be localized. Using evocative face morphs and fMRI, we segregate two areas extending beyond the previously established face-sensitive core network, centered on the inferior temporal sulci and angular gyri bilaterally, both of which process changes of facial age. By means of probabilistic tractography, we compare their patterns of functional activation and structural connectivity. The ventral portion of Wernicke's understudied perpendicular association fasciculus is shown to interconnect the two areas, and activation within these clusters is related to the probability of fiber connectivity between them. In addition, post-hoc age-rating competence is found to be associated with high response magnitudes in the left angular gyrus. Our results provide the first evidence that facial age has a distinct representation pattern in the posterior human brain. We propose that particular face-sensitive nodes interact with additional object-unselective quantification modules to obtain individual estimates of facial age. This brain network processing the age of faces differs from the cortical areas that have previously been linked to less developmental but instantly changeable face aspects. Our probabilistic method of associating activations with connectivity patterns reveals an exemplary link that can be used to further study, assess and quantify structure-function relationships. PMID:23185334
Brain activation to facial expressions in youth with PTSD symptoms.
Garrett, Amy S; Carrion, Victor; Kletter, Hilit; Karchemskiy, Asya; Weems, Carl F; Reiss, Allan
2012-05-01
This study examined activation to facial expressions in youth with a history of interpersonal trauma and current posttraumatic stress symptoms (PTSS) compared to healthy controls (HC). Twenty-three medication-naive youth with PTSS and 23 age- and gender-matched HC underwent functional magnetic resonance imaging (fMRI) while viewing fearful, angry, sad, happy, and neutral faces. Data were analyzed for group differences in location of activation, as well as timing of activation during the early versus late phase of the block. Using SPM5, significant activation (P < .05 FWE [Family-Wise Error] corrected, extent = 10 voxels) associated with the main effect of group was identified. Activation from selected clusters was extracted to SPSS software for further analysis of specific facial expressions and temporal patterns of activation. The PTSS group showed significantly greater activation than controls in several regions, including the amygdala/hippocampus, medial prefrontal cortex, insula, and ventrolateral prefrontal cortex, and less activation than controls in the dorsolateral prefrontal cortex (DLPFC). These group differences in activation were greatest during angry, happy, and neutral faces, and predominantly during the early phase of the block. Post hoc analyses showed significant Group × Phase interactions in the right amygdala and left hippocampus. Traumatic stress may impact development of brain regions important for emotion processing. Timing of activation may be altered in youth with PTSS. © 2012 Wiley Periodicals, Inc.
A Web-based Game for Teaching Facial Expressions to Schizophrenic Patients.
Gülkesen, Kemal Hakan; Isleyen, Filiz; Cinemre, Buket; Samur, Mehmet Kemal; Sen Kaya, Semiha; Zayim, Nese
2017-07-12
Recognizing facial expressions is an important social skill. In some psychological disorders such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help to develop facial expression recognition skills through educational software and games. The aim was to examine whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs. Eighteen patients were in the study group while 14 were in the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we performed a post-test for all patients. The median pre-test score (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052) and 20 in the study group (p<0.001). Computer games may be used for the purpose of educating people who have difficulty in recognizing facial expressions.
The Oval Female Facial Shape--A Study in Beauty.
Goodman, Greg J
2015-12-01
Our understanding of who is beautiful seems to be innate but has been argued to conform to mathematical principles and proportions. One aspect of beauty is facial shape, which is gender specific. In women, an oval facial shape is considered attractive. The aims were to study the facial shape of beautiful actors, pageant title winners, and performers across ethnicities and in different time periods, and to construct an ideal oval shape based on the average of their facial shape dimensions. Twenty-one full-face photographs of purportedly beautiful female actors, performers, and pageant winners were analyzed and an oval was constructed from their facial parameters. Only 3 of the 21 faces were totally symmetrical; in most of the asymmetrical faces, the left upper and lower face were the larger. The average oval was subsequently constructed from an average bizygomatic distance (horizontal parameter) of 4.3 times their intercanthal distance (ICD) and a vertical dimension that averaged 6.3 times their ICD. This average oval could be fitted to many of the individual subjects, showing a smooth flow from the forehead through the temples, cheeks, jaw angle, jawline, and chin, with all these facial aspects abutting the oval. Where they did not abut, treatment may have improved these subjects.
Impaired Perception of Emotional Expression in Amyotrophic Lateral Sclerosis.
Oh, Seong Il; Oh, Ki Wook; Kim, Hee Jin; Park, Jin Seok; Kim, Seung Hyun
2016-07-01
The increasing recognition that deficits in social emotions occur in amyotrophic lateral sclerosis (ALS) is helping to explain the spectrum of neuropsychological dysfunctions, thus supporting the view of ALS as a multisystem disorder involving neuropsychological deficits as well as motor deficits. The aim of this study was to characterize the emotion perception abilities of Korean patients with ALS based on the recognition of facial expressions. Twenty-four patients with ALS and 24 age- and sex-matched healthy controls completed neuropsychological tests and facial emotion recognition tasks [ChaeLee Korean Facial Expressions of Emotions (ChaeLee-E)]. The ChaeLee-E test includes facial expressions for seven emotions: happiness, sadness, anger, disgust, fear, surprise, and neutral. The ability to perceive facial emotions was significantly worse among ALS patients than among healthy controls [65.2±18.0% vs. 77.1±6.6% (mean±SD), p=0.009]. Eight of the 24 patients (33%) scored below the 5th percentile score of controls for recognizing facial emotions. Emotion perception deficits occur in Korean ALS patients, particularly regarding facial expressions of emotion. These findings expand the spectrum of cognitive and behavioral dysfunction associated with ALS into emotion processing dysfunction.
Hizay, Arzu; Ozsoy, Umut; Demirel, Bahadir Murat; Ozsoy, Ozlem; Angelova, Srebrina K; Ankerne, Janina; Sarikcioglu, Sureyya Bilmen; Dunlop, Sarah A; Angelov, Doychin N; Sarikcioglu, Levent
2012-06-01
Despite increased understanding of peripheral nerve regeneration, functional recovery after surgical repair remains disappointing. A major contributing factor is the extensive collateral branching at the lesion site, which leads to inaccurate axonal navigation and aberrant reinnervation of targets. To determine whether Y-tube reconstruction improved axonal regrowth and whether this was associated with improved function, we used a Y-tube conduit with the aim of improving navigation of regenerating axons after facial nerve transection in rats. Retrograde labeling from the zygomatic and buccal branches showed a halving in the number of double-labeled facial motor neurons (15% vs 8%; P < .05) after Y-tube reconstruction compared with facial-facial anastomosis coaptation. However, in both surgical groups, the proportion of polyinnervated motor endplates was similar (≈ 30%; P > .05), and video-based motion analysis of whisking revealed similarly poor function. Although Y-tube reconstruction decreases axonal branching at the lesion site and improves axonal navigation compared with facial-facial anastomosis coaptation, it fails to promote monoinnervation of motor endplates and confers no functional benefit.
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy of >90% for the chosen combinations demonstrates their suitability as command controllers.
Yang, Cheng-San; Chen, Solomon Chih-Cheng; Yang, Yung-Cheng; Huang, Li-Chung; Guo, How-Ran; Yang, Hsin-Yi
2017-10-03
The facial region is a commonly fractured site, but the etiology varies widely by country and geographic region. To date, there are no population-based studies of facial fractures in Taiwan. We conducted a retrospective study of patients diagnosed with facial fracture and registered in the National Health Insurance Research Database of Taiwan between 1997 and 2011. The epidemiological characteristics of this cohort were analyzed, including the etiology, fracture site, associated injuries, and sex and age distributions. A total of 6,013 cases were identified that involved facial fractures. Most patients were male (69.8%), aged 18-29 years (35.8%), and had fractures caused by road traffic accidents (RTAs; 55.2%), particularly motorcycle accidents (31.5%). Falls increased in frequency with advancing age, reaching 23.9% among the elderly (age > 65 years). The most common sites of involvement were the malar and maxillary bones (54.0%), but nasal bone fractures were more common among those younger than 18 years. Most facial injuries in Taiwan occur in young males and typically result from RTAs, particularly involving motorcycles. However, with increasing age, there is an increase in the proportion of facial injuries due to falls.
Labbè, D; Bussu, F; Iodice, A
2012-06-01
Long-standing peripheral monolateral facial paralysis in the adult has challenged otolaryngologists, neurologists and plastic surgeons for centuries. Notwithstanding, the ultimate goal of normality of the paralyzed hemi-face with symmetry at rest, and the achievement of a spontaneous symmetrical smile with corneal protection, has not been fully reached. At the beginning of the 20th century, the main options were neural reconstructions including accessory to facial nerve transfer and hypoglossal to facial nerve crossover. In the first half of the 20th century, various techniques for static correction with autologous temporalis muscle and fascia grafts were proposed, such as the techniques of Gillies (1934) and McLaughlin (1949). Cross-facial nerve grafts have been performed since the beginning of the 1970s, often with the attempt to transplant free muscle to restore active movements. However, these transplants were non-vascularized, and further evaluations revealed central fibrosis and minimal return of function. A major step was taken in the second half of the 1970s, with the introduction of microneurovascular muscle transfer in facial reanimation, which, often combined in two steps with a cross-facial nerve graft, has become the most popular option for the comprehensive treatment of long-standing facial paralysis. In the second half of the 1990s in France, a regional muscle transfer technique with the definite advantages of being one-step, technically easier and relatively fast, namely lengthening temporalis myoplasty, acquired popularity and consensus among surgeons treating facial paralysis. A total of 111 patients with facial paralysis were treated in Caen between 1997 and 2005 by a single surgeon who developed 2 variants of the technique (V1, V2), each with its advantages and disadvantages, but both based on the same anatomo-functional background and aim, which is transfer of the temporalis muscle tendon on the coronoid process to the lips. For a comprehensive treatment of the paralysis, the eyelids are usually managed by Paul Tessier's technique to lengthen the levator muscle of the upper eyelid by aponeurosis interposition, combined with external blepharorrhaphy with Krastinova-Lolov's technique. Facial reanimation using lengthening temporalis myoplasty is a dynamic procedure that has its roots in the techniques of Gillies and McLaughlin. This method is a true lengthening myoplasty procedure using no intermediate grafts. In general, the results with a 1-stage combination of lengthening temporalis myoplasty and static correction of the lagophthalmos appear comparable with the major series in the literature using free microneurovascular transfers combined with cross-facial nerve grafts for longstanding peripheral monolateral facial paralysis. The obvious advantages of temporalis elongation myoplasty consist in its technical ease, a single step, low incidence of complications and markedly reduced operating time.
Body size and allometric variation in facial shape in children.
Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt
2018-02-01
Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of to what extent age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.
Dehghani, Mahboobe; Jahanbin, Arezoo; Omidkhoda, Maryam; Entezari, Mostafa; Shadkam, Elaheh
2018-03-01
Craniofacial anthropometric studies measure differences in human craniofacial dimensions. The aim of this study was to determine the facial anthropometric dimensions of newborn to 12-year-old girls with nonsyndromic unilateral cleft lip and palate (UCLP). In this cross-sectional analytical study, data were collected from 65 girls with UCLP, ranging in age from infancy to 12 years. Digital frontal and profile facial photographs were transferred to a computer and the desired anthropometric landmarks were traced on each image. Fifteen anthropometric parameters were measured: the facial, nasofacial, nasomental, Z, nasolabial, nasal base inclination, labial fissure inclination, nasal deviation, mentocervical and facial convexity angles, as well as the ratios of nasal prominence to nasal height, middle to lower facial third, upper lip to lower lip height, columellar length to upper lip length, and incisal show to incisal width. Pearson coefficients and linear regression were used for statistical analysis. The upper lip to lower lip height ratio and the nasofacial, nasolabial, and facial convexity angles decreased with the age of the patients. In contrast, the nasomental angle and the ratios of columellar length to upper lip length, middle facial height to lower facial height, and incisal show to incisal width increased. The other parameters studied did not appear to have any significant correlation with age. In girls with UCLP, various craniofacial dimensions have different growth rates, with some parts growing more slowly than others. Some of the parameters studied were significantly correlated with age, and growth-related curves and equations were therefore obtained and presented.
Investigation into the use of photoanthropometry in facial image comparison.
Moreton, Reuben; Morley, Johanna
2011-10-10
Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high-resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor-quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
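As a rough sketch of the photoanthropometric procedure described above (not the authors' implementation), the following Python snippet converts inter-landmark distances into scale-free proportionality indices that could be compared between images; the landmark names, coordinates and choice of baseline distance are hypothetical.

```python
import numpy as np

# Hypothetical 2D landmark coordinates (pixels) from one facial image.
landmarks = {
    "endocanthion_l": (412, 380), "endocanthion_r": (508, 378),
    "subnasale": (460, 520), "stomion": (462, 590), "gnathion": (465, 700),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

def proportionality_indices(lm):
    """Convert raw inter-landmark distances into scale-free
    proportionality indices (PIs), here expressed relative to the
    intercanthal width (an assumed baseline)."""
    base = dist(lm["endocanthion_l"], lm["endocanthion_r"])
    return {
        "subnasale-stomion / intercanthal": dist(lm["subnasale"], lm["stomion"]) / base,
        "stomion-gnathion / intercanthal": dist(lm["stomion"], lm["gnathion"]) / base,
    }

# PIs from two images of (possibly) the same person would then be compared;
# differences as large as between-individual variation would argue against
# using such indices for exclusion, as the study reports.
print(proportionality_indices(landmarks))
```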
Facial nerve palsy: analysis of cases reported in children in a suburban hospital in Nigeria.
Folayan, M O; Arobieke, R I; Eziyi, E; Oyetola, E O; Elusiyan, J
2014-01-01
The study describes the epidemiology, treatment, and treatment outcomes of the 10 cases of facial nerve palsy seen in children managed at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife over a 10-year period. It also compares the findings with reports from developed countries. This was a retrospective cohort review of pediatric cases of facial nerve palsy encountered in all the clinics run by specialists in the above-named hospital. A diagnosis of facial palsy was based on International Classification of Diseases, Ninth Revision, Clinical Modification codes. Information retrieved from the case notes included sex, age, number of days with the lesion prior to presentation in the clinic, diagnosis, treatment, treatment outcome, and referral clinic. Only 10 cases of facial nerve palsy were diagnosed in the institution during the study period. The prevalence of facial nerve palsy in this hospital was 0.01%. The lesion more commonly affected males and the right side of the face. All cases were associated with infections, mainly mumps (70% of cases). Case management included the use of steroids and eye pads for cases that presented within 7 days, and steroids, eye pads, and physical therapy for cases that presented later. All cases of facial nerve palsy associated with mumps and malaria infection fully recovered. The two cases of facial nerve palsy associated with otitis media only partially recovered. Facial nerve palsy in pediatric patients is most commonly associated with mumps in the study environment. Successes were recorded with steroid therapy.
Variation of facial features among three African populations: Body height match analyses.
Taura, M G; Adamu, L H; Gudaji, A
2017-01-01
Body height is one of the variables that show a correlation with facial craniometry. Here we seek to discriminate three populations (Nigerians, Ugandans and Kenyans) using facial craniometry based on different categories of adult male body height. A total of 513 individuals, comprising 234 Nigerians, 169 Ugandans and 110 Kenyans with a mean age of 25.27 years (SD 5.13; range 18-40 years), participated. Paired and unpaired facial features were measured using direct craniometry. Multivariate and stepwise discriminant function analyses were used to differentiate the three populations. The results showed significant overall facial differences among the three populations in all the body height categories. Skull height, total facial height, outer canthal distance, exophthalmometry, right ear width and nasal length were significantly different among the three populations irrespective of body height category. Other variables were sensitive to body height. Stepwise discriminant function analyses included a maximum of six variables for better discrimination between the three populations. The single best discriminator of the groups was total facial height; however, for body heights >1.70 m, the single best discriminator was nasal length. Most of the variables contributed to function 1, which therefore provided better discrimination than function 2. In conclusion, adult body height, in addition to other factors such as age, sex, and ethnicity, should be considered when making decisions based on facial craniometry. However, not all facial linear dimensions were sensitive to body height. Copyright © 2016 Elsevier GmbH. All rights reserved.
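A minimal sketch of the kind of discriminant function analysis described above, using scikit-learn's LinearDiscriminantAnalysis on synthetic stand-in measurements; the variables, values and group labels are illustrative only, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in data: rows are individuals, columns are facial
# measurements (e.g. total facial height, nasal length, outer canthal
# distance); labels are population groups. Values are illustrative only.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([121, 52, 95], 4, size=(60, 3)),   # population A
    rng.normal([118, 55, 97], 4, size=(60, 3)),   # population B
    rng.normal([124, 50, 93], 4, size=(60, 3)),   # population C
])
y = np.repeat(["A", "B", "C"], 60)

# Fit the discriminant functions and inspect how well they separate groups.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("classification accuracy on training data:", lda.score(X, y))
print("discriminant function coefficients:\n", lda.coef_)
```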
Tse, Kwong Ming; Tan, Long Bin; Lee, Shu Jin; Lim, Siak Piang; Lee, Heow Pueh
2015-06-01
In spite of the anatomic proximity of the facial skeleton and cranium, there is a lack of information in the literature regarding the relationship between facial and brain injuries. This study aims to correlate brain injuries with facial injuries using the finite element method (FEM). Nine common impact scenarios of facial injuries are simulated, with their individual stress wave propagation paths in the facial skeleton and the intracranial brain. Fractures of craniofacial bones and intracranial injuries are evaluated based on the tolerance limits of the biomechanical parameters. The general trend of maximum intracranial biomechanical parameters, found in nasal bone and zygomaticomaxillary impacts, indicates that the severity of brain injury is highly associated with the proximity of the location of impact to the brain. It is hypothesized that the midface is capable of absorbing considerable energy and protecting the brain from impact. The nasal cartilages dissipate the impact energy in the form of large-scale deformation and fracture, with the vomer-ethmoid diverting stress to the "crumpling zone" of the air-filled sphenoid and ethmoidal sinuses; in its most natural manner, the face protects the brain. This numerical study hopes to provide surgeons some insight into the brain injuries to be expected in various scenarios of facial trauma and to help in better diagnosis of unsuspected brain injury, thereby decreasing the morbidity and mortality associated with facial trauma. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features combined with the adaptive CS-LBP features were found to yield high FER rates. Evaluation of the adaptive texture features shows performance that is competitive with the nonadaptive features and higher than other state-of-the-art approaches.
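To make the center-symmetric LBP idea concrete, here is a plain (non-adaptive) 3x3 CS-LBP sketch in Python; the paper's granulometry-based choice of neighbourhood size is not reproduced, and the face patch below is random stand-in data.

```python
import numpy as np

def cs_lbp_image(img, threshold=0.0):
    """Center-symmetric LBP (CS-LBP) over a 3x3 neighbourhood.
    Each pixel gets a 4-bit code by comparing the four pairs of
    diagonally opposite neighbours; histograms of these codes over
    image blocks form a simple texture descriptor."""
    img = img.astype(np.float32)
    # the 8 neighbours as (row offset, col offset), in circular order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit in range(4):                      # 4 centre-symmetric pairs
        dr1, dc1 = offs[bit]
        dr2, dc2 = offs[bit + 4]
        a = img[1 + dr1:h - 1 + dr1, 1 + dc1:w - 1 + dc1]
        b = img[1 + dr2:h - 1 + dr2, 1 + dc2:w - 1 + dc2]
        codes |= ((a - b) > threshold).astype(np.uint8) << bit
    return codes

# Histogram of CS-LBP codes as a basic (non-adaptive) texture feature.
face = np.random.rand(64, 64)                 # stand-in for a face patch
hist, _ = np.histogram(cs_lbp_image(face), bins=16, range=(0, 16))
print(hist / hist.sum())
```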
Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian
2012-01-01
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions. PMID:22438875
The normal-equivalent: a patient-specific assessment of facial harmony.
Claes, P; Walters, M; Gillett, D; Vandermeulen, D; Clement, J G; Suetens, P
2013-09-01
Evidence-based practice in oral and maxillofacial surgery would greatly benefit from an objective assessment of facial harmony or gestalt. Normal reference faces have previously been introduced, but they describe harmony in facial form as an average only and fail to report on harmonic variations found between non-dysmorphic faces. In this work, facial harmony, in all its complexity, is defined using a face-space, which describes all possible variations within a non-dysmorphic population; this was sampled here, based on 400 healthy subjects. Subsequently, dysmorphometrics, which involves the measurement of morphological abnormalities, is employed to construct the normal-equivalent within the given face-space of a presented dysmorphic face. The normal-equivalent can be seen as a synthetic identical but unaffected twin that is a patient-specific and population-based normal. It is used to extract objective scores of facial discordancy. This technique, along with a comparing approach, was used on healthy subjects to establish ranges of discordancy that are accepted to be normal, as well as on two patient examples before and after surgical intervention. The specificity of the presented normal-equivalent approach was confirmed by correctly attributing abnormality and providing regional depictions of the known dysmorphologies. Furthermore, it proved to be superior to the comparing approach. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
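As a loose sketch of the face-space idea above (the published method uses dysmorphometrics and robust superimposition, which are not reproduced here), one can approximate a "normal-equivalent" by projecting a face onto a few modes of a PCA space built from healthy faces and treating the residual as a crude discordancy score; all data below are synthetic and the landmark count is arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

# Face-space from healthy faces: each row is a flattened set of
# corresponding (quasi-)landmark coordinates. Synthetic data for illustration.
rng = np.random.default_rng(1)
healthy = rng.normal(size=(400, 30))         # 400 faces x 10 3-D landmarks

space = PCA(n_components=5).fit(healthy)     # keep a few modes of "normal" variation

def normal_equivalent(face):
    """Project a face into the healthy face-space and reconstruct it.
    The reconstruction stays within normal variation and acts as a
    patient-specific 'unaffected twin'; the per-landmark residual is a
    crude discordancy score."""
    recon = space.inverse_transform(space.transform(face[None, :]))[0]
    residual = (face - recon).reshape(-1, 3)
    return recon, np.linalg.norm(residual, axis=1)

# A synthetic "patient": a healthy face with a local anomaly on one landmark.
patient = healthy[0] + np.r_[np.zeros(27), [4.0, -3.0, 2.0]]
_, discordancy = normal_equivalent(patient)
print("per-landmark discordancy:", np.round(discordancy, 2))
```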
Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.
2017-06-01
The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated using a Raspberry Pi. Results are stored in a database for easy and quick access. The databases are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.
A fiber-reinforced composite prosthesis restoring a lateral midfacial defect: a clinical report.
Kurunmäki, Hemmo; Kantola, Rosita; Hatamleh, Muhanad M; Watts, David C; Vallittu, Pekka K
2008-11-01
This clinical report describes the use of a glass fiber-reinforced composite (FRC) substructure to reinforce the silicone elastomer of a large facial prosthesis. The FRC substructure was shaped into a framework and embedded into the silicone elastomer to form a reinforced facial prosthesis. The prosthesis is designed to overcome the disadvantages associated with traditionally fabricated prostheses, namely delamination of the silicone from the acrylic base, poor marginal adaptation over time, and poor simulation of facial expressions.
[Facial palsy: diagnosis and management by primary care physicians].
Alvarez, V; Dussoix, P; Gaspoz, J-M
2009-01-28
The incidence of facial palsy is about 50/100,000/year, i.e. 210 cases/year in Geneva. Clinicians can be puzzled by it because it encompasses aetiologies with very diverse prognoses. Most patients suffer from Bell palsy, which evolves favourably. Some, however, suffer from diseases such as meningitis, HIV infection, Lyme disease and stroke (CVA) that require fast identification because of their severity and the need for specific treatments. This article proposes an algorithm for pragmatic and evidence-based management of facial palsy.
Cosmetics alter biologically-based factors of beauty: evidence from facial contrast.
Jones, Alex L; Russell, Richard; Ward, Robert
2015-02-28
The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphism in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrast and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate the sexual dimorphism of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast.
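One common way to operationalise luminance facial contrast is sketched below under the assumption of a simple Michelson-style definition; the study's exact measures (and its colour contrasts) are not reproduced, and the image and region masks are stand-ins.

```python
import numpy as np

def region_contrast(luminance, feature_mask, surround_mask):
    """Luminance contrast of one facial feature, defined here (one common
    formulation, assumed rather than taken from the study) as the
    normalized difference between mean luminance of the surrounding skin
    and of the feature itself."""
    feat = luminance[feature_mask].mean()
    skin = luminance[surround_mask].mean()
    return (skin - feat) / (skin + feat)

# Stand-in data: a luminance image plus boolean masks for the brow and the
# skin immediately around it (in practice derived from facial landmarks).
L = np.random.rand(256, 256)
brow = np.zeros_like(L, bool)
brow[100:110, 60:120] = True
around = np.zeros_like(L, bool)
around[85:125, 50:130] = True
around[brow] = False
print("brow contrast:", region_contrast(L, brow, around))
```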
Estimation of human emotions using thermal facial information
Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac
2014-01-01
In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are largely opaque in the thermal infrared spectrum. As a result, when infrared imagery is used to analyze human facial information, the eyeglass regions appear dark and the thermal information of the eyes is lost. We propose a temperature space method to correct for the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and the PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
Men's facial masculinity: when (body) size matters.
Holzleitner, Iris J; Hunter, David W; Tiddeman, Bernard P; Seck, Alassane; Re, Daniel E; Perrett, David I
2014-01-01
Recent studies suggest that judgments of facial masculinity reflect more than sexually dimorphic shape. Here, we investigated whether the perception of masculinity is influenced by facial cues to body height and weight. We used the average differences in three-dimensional face shape of forty men and forty women to compute a morphological masculinity score, and derived analogous measures for facial correlates of height and weight based on the average face shape of short and tall, and light and heavy men. We found that facial cues to body height and weight had substantial and independent effects on the perception of masculinity. Our findings suggest that men are perceived as more masculine if they appear taller and heavier, independent of how much their face shape differs from women's. We describe a simple method to quantify how body traits are reflected in the face and to define the physical basis of psychological attributions.
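A minimal sketch of how a morphological score along a group-difference axis (female-to-male, short-to-tall, light-to-heavy) might be computed from aligned shape vectors; this illustrates the general idea rather than the authors' exact method, and the data are synthetic.

```python
import numpy as np

def shape_score(face, group_a_mean, group_b_mean):
    """Score a face along the axis running from the mean shape of one
    group to another (e.g. female -> male, short -> tall, light -> heavy).
    Faces are flattened landmark configurations after alignment."""
    axis = group_b_mean - group_a_mean
    axis = axis / np.linalg.norm(axis)
    midpoint = (group_a_mean + group_b_mean) / 2.0
    return float((face - midpoint) @ axis)

# Synthetic aligned shapes, for illustration only.
rng = np.random.default_rng(2)
female_mean = rng.normal(size=300)
male_mean = female_mean + rng.normal(scale=0.3, size=300)
some_face = male_mean + rng.normal(scale=0.1, size=300)
print("masculinity score:", shape_score(some_face, female_mean, male_mean))
```

Analogous scores for height- and weight-related facial axes could be obtained by swapping in the corresponding group means, which is the sense in which such body traits can be quantified in the face.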
Developmental trends in the process of constructing own- and other-race facial composites.
Kehn, Andre; Renken, Maggie D; Gray, Jennifer M; Nunez, Narina L
2014-01-01
The current study examined developmental differences from the age of 5 to 18 in the creation process of own- and other-race facial composites. In addition, it considered how differences in the creation process affect similarity ratings. Participants created two composites (one own- and one other-race) from memory. The complexity of the composite creation process was recorded during Phase One. In Phase Two, a separate group of participants rated the composites for similarity to the corresponding target face. Results support the cross-race effect, developmental differences (based on composite creators) in similarity ratings, and the importance of the creation process for own- and other-race facial composites. Together, these findings suggest that as children get older the process through which they create facial composites becomes more complex and their ability to create facial composites improves. Increased complexity resulted in higher rated composites. Results are discussed from a psycho-legal perspective.
Developing psychological services following facial trauma.
Choudhury-Peters, Deba; Dain, Vicky
2016-01-01
Adults presenting to oral and maxillofacial surgery services are at high risk of psychological morbidity. Research by the Institute of Psychotrauma and the centre for oral and maxillofacial surgery trauma clinic at the Royal London Hospital (2015) demonstrated that nearly 40% of patients met diagnostic criteria for depression, post-traumatic stress disorder (PTSD), anxiety, alcohol misuse, or substance misuse, or were presenting with facial appearance distress. Most facial injury patients were not receiving mental health assessment or treatment, and the maxillofacial team did not have direct access to psychological services. Based on these research findings, an innovative one-year pilot psychology service was designed and implemented within the facial trauma clinic. The project addressed this need by offering collaborative medical and psychological care for all facial injury patients. The project provided brief screening, assessment, and early psychological intervention. The medical team were trained to better recognise and respond to psychological distress.
Wei, R; Claes, P; Walters, M; Wholley, C; Clement, J G
2011-06-01
The facial region has traditionally been quantified using linear anthropometrics. These are well established in dentistry, but require expertise to be used effectively. The aim of this study was to augment the utility of linear anthropometrics by applying them in conjunction with modern 3-D morphometrics. Facial images of 75 males and 94 females aged 18-25 years with self-reported Caucasian ancestry were used. An anthropometric mask was applied to establish corresponding quasi-landmarks on the images in the dataset. A statistical face-space, encoding shape covariation, was established. The facial median plane was extracted, facilitating both manual and automated indication of commonly used midline landmarks. From both indications, facial convexity angles were calculated and compared. The angles were related to the face-space using a regression-based pathway, enabling the visualization of the facial form associated with convexity variation. Good agreement between the manual and automated angles was found (Pearson correlation: 0.9478-0.9474; Dahlberg root mean squared error: 1.15°-1.24°). The population mean angle was 166.59° (SD 5.09°) in males and 166.29° (SD 5.2°) in females. The angle pathway provided valuable visual feedback. Linear facial anthropometrics can be extended when used in combination with a face-space derived from 3-D scans and the exploration of property pathways inferred in a statistically verifiable way. © 2011 Australian Dental Association.
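For illustration, a facial convexity angle can be computed from three midline landmarks (glabella, subnasale, pogonion) as the angle at subnasale; the coordinates below are hypothetical and this particular landmark choice is an assumption, not necessarily the study's definition.

```python
import numpy as np

def convexity_angle(glabella, subnasale, pogonion):
    """Facial convexity angle (degrees) measured at subnasale, between
    the vectors pointing towards glabella and towards pogonion."""
    g, s, p = map(np.asarray, (glabella, subnasale, pogonion))
    u, v = g - s, p - s
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Illustrative 3-D midline landmarks (mm); a perfectly straight profile
# gives 180 degrees, and smaller values indicate a more convex profile.
print(convexity_angle((0, 65, 10), (0, 0, 18), (0, -55, 9)))
```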
Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C
2017-01-01
Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.
Nichol, Kathryn; Bigelow, Philip; O'Brien-Pallas, Linda; McGeer, Allison; Manno, Mike; Holness, D Linn
2008-09-01
Communicable respiratory illness is an important cause of morbidity among nurses. One of the key reasons for occupational transmission of this illness is the failure to implement appropriate barrier precautions, particularly facial protection. The objectives of this study were to describe the factors that influence nurses' decisions to use facial protection and to determine their relative importance in predicting compliance. This cross-sectional survey was conducted in 9 units of 2 urban hospitals in which nursing staff regularly use facial protection. A total of 400 self-administered questionnaires were provided to nurses, and 177 were returned (44% response rate). Less than half of respondents reported compliance with the recommended use of facial protection (eye/face protection, respirators, and surgical masks) to prevent occupational transmission of communicable respiratory disease. Multivariate analysis showed 5 factors to be key predictors of nurses' compliance with the recommended use of facial protection. These factors include full-time work status, greater than 5 years tenure as a nurse, at least monthly use of facial protection, a belief that media coverage of infectious diseases impacts risk perception and work practices, and organizational support for health and safety. Strategies and interventions based on these findings should result in enhanced compliance with facial protection and, ultimately, a reduction in occupational transmission of communicable respiratory illness.
Wirthlin, J; Kau, C H; English, J D; Pan, F; Zhou, H
2013-09-01
The objective of this study was to compare the facial morphologies of an adult Chinese population and a Houstonian white population. Three-dimensional (3D) images were acquired using a commercially available stereophotogrammetric camera system, 3dMDface™. Using the system, 100 subjects from a Houstonian population and 71 subjects from a Chinese population were photographed. A complex mathematical algorithm was performed to generate a composite facial average (one for males and one for females) for each subgroup. The computer-generated facial averages were then superimposed based on a previously validated superimposition method. The facial averages were evaluated for differences. Distinct facial differences were evident between the subgroups evaluated. These areas included the nasal tip, the peri-orbital area, the malar process, the labial region, the forehead, and the chin. Overall, the mean facial difference between the Chinese and Houstonian female averages was 2.73 ± 2.20 mm, while the difference between the Chinese and Houstonian males was 2.83 ± 2.20 mm. The percent similarity was 10.45% for the female population pairing and 12.13% for the male population pairing. The average adult Chinese and Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages that should be considered in treatment planning. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
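A rough sketch of how a mean facial difference and a tolerance-based "percent similarity" could be computed from two superimposed surfaces sampled at corresponding points; the 0.5 mm tolerance and the synthetic point sets are assumptions, not the study's parameters.

```python
import numpy as np

def mean_difference(points_a, points_b, tolerance=0.5):
    """Mean point-to-point distance (mm) between two superimposed facial
    surfaces sampled at corresponding points, plus the percentage of
    points lying within a tolerance (a crude 'percent similarity')."""
    d = np.linalg.norm(points_a - points_b, axis=1)
    return d.mean(), d.std(), 100.0 * (d <= tolerance).mean()

# Synthetic stand-ins for two registered composite faces.
rng = np.random.default_rng(3)
avg_a = rng.normal(size=(5000, 3)) * 40                    # composite face A
avg_b = avg_a + rng.normal(scale=2.0, size=avg_a.shape)    # perturbed face B
mean_d, sd_d, pct = mean_difference(avg_a, avg_b)
print(f"difference {mean_d:.2f} +/- {sd_d:.2f} mm, {pct:.1f}% within 0.5 mm")
```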
Social perception of morbidity in facial nerve paralysis.
Li, Matthew Ka Ki; Niles, Navin; Gore, Sinclair; Ebrahimi, Ardalan; McGuinness, John; Clark, Jonathan Robert
2016-08-01
There are many patient-based and clinician-based scales measuring the severity of facial nerve paralysis and the impact on quality of life, however, the social perception of facial palsy has received little attention. The purpose of this pilot study was to measure the consequences of facial paralysis on selected domains of social perception and compare the social impact of paralysis of the different components. Four patients with typical facial palsies (global, marginal mandibular, zygomatic/buccal, and frontal) and 1 control were photographed. These images were each shown to 100 participants who subsequently rated variables of normality, perceived distress, trustworthiness, intelligence, interaction, symmetry, and disability. Statistical analysis was performed to compare the results among each palsy. Paralyzed faces were considered less normal compared to the control on a scale of 0 to 10 (mean, 8.6; 95% confidence interval [CI] = 8.30-8.86) with global paralysis (mean, 3.4; 95% CI = 3.08-3.80) rated as the most disfiguring, followed by the zygomatic/buccal (mean, 6.0; 95% CI = 5.68-6.37), marginal (mean, 6.5; 95% CI = 6.08-6.86), and then temporal palsies (mean, 6.9; 95% CI = 6.57-7.21). Similar trends were seen when analyzing these palsies for perceived distress, intelligence, and trustworthiness, using a random effects regression model. Our sample suggests that society views paralyzed faces as less normal, less trustworthy, and more distressed. Different components of facial paralysis are worse than others and surgical correction may need to be prioritized in an evidence-based manner with social morbidity in mind. © 2016 Wiley Periodicals, Inc. Head Neck 38:1158-1163, 2016. © 2016 Wiley Periodicals, Inc.
Children with facial morphoea managing everyday life: a qualitative study.
Stasiulis, E; Gladstone, B; Boydell, K; O'Brien, C; Pope, E; Laxer, R M
2018-02-16
Facial morphoea is a chronic inflammatory skin disorder, typically presenting in childhood and adolescence, which can be disfiguring, and which has been suggested to cause mild-to-moderate impairment in quality of life. To explore the everyday experiences of children with facial morphoea by examining the psychosocial impact of living with facial morphoea and how children and their families manage its impact. We used a qualitative, social constructionist approach involving focus groups, in-depth interviews and drawing activities with 10 children with facial morphoea aged 8-17 years and 13 parents. Interpretive thematic analysis was utilized to examine the data. Children and parents reported on the stress of living with facial morphoea, which was related to the lack of knowledge about facial morphoea and the extent to which they perceived themselves as different from others. Self-perceptions were based on the visibility of the lesion, different phases of life transitions and the reactions of others (e.g. intrusive questioning and bullying). Medication routines, and side-effects such as weight gain, added to the stress experienced by the participants. To manage the impact of facial morphoea, children and their parents used strategies to normalize the experience by hiding physical signs of the illness, constructing explanations about what 'it' is, and by connecting with their peers. Understanding what it is like to live with facial morphoea from the perspectives of children and parents is important for devising ways to help children with the disorder achieve a better quality of life. Healthcare providers can help families access resources to manage anxiety, deal with bullying and construct adequate explanations of facial morphoea, in addition to providing opportunities for peer support. © 2018 British Association of Dermatologists.
Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W
2015-08-01
The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
Infrared-based blink-detecting glasses for facial pacing: towards a bionic blink.
Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T
2015-01-01
IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step towards reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN Standard safety glasses were equipped with an infrared (IR) emitter/detector pair oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed. SETTING Tertiary care Facial Nerve Center. PARTICIPANTS 24 healthy volunteers. MAIN OUTCOME MEASURE Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted gaze from central to far peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze, but generated false-detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related lid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6.3% of the time during lateral eye movements, 10.4% during upward movements, 46.5% during downward movements, and 5.6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions disrupted sensor output if they caused substantial squinting or shifted the glasses. CONCLUSION AND RELEVANCE Our blink detection system provides a reliable, non-invasive indication of eyelid closure using an invisible light beam passing in front of the eye. Future versions will aim to mitigate detection errors by using multiple IR emitter/detector pairs mounted on the glasses, and alternative frame designs may reduce shifting of the sensors relative to the eye during facial movements. PMID:24699708
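A minimal sketch of derivative-based blink detection of the kind described above: thresholding the peak rate of change of the sensor signal rather than its raw magnitude, so that fast eyelid closures are flagged while slower gaze-related lid movements are not. The sampling rate, threshold and synthetic signal are assumptions, not the device's actual parameters.

```python
import numpy as np

def detect_blinks(signal, fs, rate_threshold):
    """Detect blink onsets from an IR beam-interruption signal by
    thresholding its first derivative (rate of change), which separates
    fast blinks from slower gaze-related eyelid movements."""
    rate = np.diff(signal) * fs                  # approx. derivative per second
    above = rate > rate_threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets / fs                           # blink onset times (s)

# Synthetic 1-s trace sampled at 1 kHz: one fast blink and one slow downward gaze.
fs = 1000
t = np.arange(fs) / fs
sig = np.zeros_like(t)
sig += np.exp(-((t - 0.3) / 0.02) ** 2)          # fast eyelid closure (blink)
sig += 0.8 / (1 + np.exp(-(t - 0.7) / 0.1))      # slow lid lowering (gaze shift)
print("blink onsets (s):", detect_blinks(sig, fs, rate_threshold=10.0))
```

With these illustrative settings only the fast closure exceeds the rate threshold, mirroring the study's observation that the first derivative distinguishes blinks from downward gaze better than signal magnitude alone.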
Blanco Souza, Túlio Armanini; Colomé, Letícia Marques; Bender, Eduardo André; Lemperle, Gottfriede
2018-06-05
Considering that aesthetic benefits can be obtained with the use of permanent filling materials, this work focuses on the development of a consensus regarding the facial and corporal use of polymethylmethacrylate (PMMA) filler in Brazil. A questionnaire regarding PMMA treatment, which included items on the main indication, application site, volume of product applied, criteria for selection of the material, complications, contraindications, and individual professional experience, was distributed to the Expert Group members. The responses were summarized and constituted the starting point for the debate regarding the use of PMMA-based fillers at The First Brazilian PMMA Symposium, held to create a guideline to be followed in PMMA facial and corporal treatments. This survey involved 87,371 cases. PMMA treatment is recommended for restorative and aesthetic purposes in facial and corporal cases, particularly for facial balance. PMMA 30% filler is recommended in specific facial sites (nose, mentum, mandible angle, zygomatic arch, and malar region). PMMA filler is contraindicated in other sites (lips) regardless of concentration. With regard to facial treatment, the juxtaperiosteal plane is the most recommended application plane. For PMMA corporal application, the intramuscular plane is the most indicated, while the intradermal and juxtadermal planes are contraindicated. Use of the submuscular plane depends on the PMMA filler concentration. The experts were also asked about the amount of PMMA recommended for each corporal site (50 mL in the calf, 100-150 mL in the gluteal region). These recommendations provide a guideline for physicians, supporting them in performing safe and efficacious treatment with PMMA fillers. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Wiertel-Krawczuk, Agnieszka; Huber, Juliusz; Wojtysiak, Magdalena; Golusiński, Wojciech; Pieńkowski, Piotr; Golusiński, Paweł
2015-05-01
Parotid gland tumor surgery sometimes leads to facial nerve paralysis. Malignant tumors affect preoperative nerve function more often than benign ones, while postoperative observations based on clinical, histological and neurophysiological studies have not been reported in detail. The aims of this pilot study were to evaluate and correlate the histological properties of the tumor (its size and location) with clinical and neurophysiological assessments of facial nerve function pre- and post-operatively (at 1 and 6 months). The comparative studies included 17 patients with benign (n = 13) and malignant (n = 4) tumors. Clinical assessment was based on the House-Brackmann scale (H-B); neurophysiological diagnostics included facial electroneurography [ENG, compound muscle action potential (CMAP)], mimetic muscle electromyography (EMG) and blink-reflex examinations (BR). Mainly grade I of H-B was recorded both pre- (n = 13) and post-operatively (n = 12) in patients with small (1.5-2.4 cm) benign tumors located in the superficial lobe. Patients with medium-size (2.5-3.4 cm) malignant tumors involving both lobes were scored at grade I (n = 2) and III (n = 2) pre-operatively and mainly VI (n = 4) post-operatively. CMAP amplitudes after stimulation of the mandibular marginal branch were reduced by about 25% in patients with benign tumors after surgery. In the cases of malignant tumors, CMAPs were not recorded following stimulation of any branch. A similar trend was found for the BR results. H-B and ENG results revealed positive correlations of tumor type and surgery with facial nerve function. Neurophysiological studies detected clinically silent facial nerve neuropathy of the mandibular marginal branch in the postoperative period. Needle EMG, ENG and BR examinations allow for the evaluation of facial muscle reinnervation and facial nerve regeneration.
Use of Computer Imaging in Rhinoplasty: A Survey of the Practices of Facial Plastic Surgeons.
Singh, Prabhjyot; Pearlman, Steven
2017-08-01
The objective of this study was to quantify the use of computer imaging by facial plastic surgeons. Facial plastic surgeons belonging to the AAFPRS were surveyed about their use of computer imaging during rhinoplasty consultations. The survey collected information about surgeon demographics, practice settings, practice patterns, and rates of computer imaging (CI) for primary and revision rhinoplasty. For those surgeons who used CI, additional information was also collected, including who performed the imaging and whether the patient was given the morphed images after the consultation. A total of 238 out of 1200 (19.8%) facial plastic surgeons responded to the survey. Of those who responded, 195 surgeons (83%) were board certified by the American Board of Facial Plastic and Reconstructive Surgery (ABFPRS). The majority of respondents (150 surgeons, 63%) used CI during rhinoplasty consultation. Of the surgeons who use CI, 92% performed the image morphing themselves. Approximately two-thirds of surgeons who use CI gave their patients a printout of the morphed images after the consultation. Computer imaging is a frequently utilized tool for facial plastic surgeons during cosmetic consultations with patients. Based on the results of this study, it can be suggested that the majority of facial plastic surgeons who use CI do so for both primary and revision rhinoplasty. As more sophisticated systems become available, utilization of CI modalities may increase. This provides the surgeon with further tools to use at his or her disposal during discussion of aesthetic surgery. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Putative golden proportions as predictors of facial esthetics in adolescents.
Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C
2008-10-01
In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
A facial reconstruction and identification technique for seriously devastating head wounds.
Joukal, Marek; Frišhons, Jan
2015-07-01
Many authors have focused on facial identification techniques, and facial reconstructions for cases when skulls have been found are especially well known. However, a standardized facial identification technique for an unknown body with seriously devastating head injuries has not yet been developed. A reconstruction and identification technique was used in 7 cases of accidents involving trains striking pedestrians. This identification technique is based on the removal of skull bone fragments, subsequent fixation of soft tissue onto a universal commercial polystyrene head model, precise suture of dermatomuscular flaps, and definitive adjustment using cosmetic treatments. After reconstruction, identifying marks such as scars, eyebrows, facial lines, facial hair and, in part, hairstyle become evident. It is then possible to present a modified picture of the reconstructed face to relatives. After comparing the results with photos of the person before death, this technique has proven to be very useful for identifying unknown bodies when other identification techniques are not available. The technique is valuable because it is relatively quick and, above all, because of its results. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
More than mere mimicry? The influence of emotion on rapid facial reactions to faces.
Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R
2007-05-01
Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.
Muhn, Channy; Rosen, Nathan; Solish, Nowell; Bertucci, Vince; Lupin, Mark; Dansereau, Alain; Weksberg, Fred; Remington, B Kent; Swift, Arthur
2012-01-01
Recent advancements, including more versatile facial fillers, refined injection techniques and the adoption of a global facial approach, have contributed to improved patient outcome and increased patient satisfaction. Nine Canadian specialists (eight dermatologists, one plastic surgeon) collaborated to develop an overview on volume restoration and contouring based on published literature and their collective clinical experience. The specialists concurred that optimal results in volume restoration and contouring depend on correcting deficiencies at various layers of the facial envelope. This includes creating a foundation for deep structural support in the supraperiosteal or submuscular plane; volume repletion of subcutaneous fat compartments; and the reestablishment of dermal and subdermal support to minimize cutaneous rhytids, grooves and furrows. It was also agreed that volume restoration and contouring using a global facial approach is essential to create a natural, youthful appearance in facial aesthetics. A comprehensive non-surgical approach should therefore incorporate combining fillers such as high-viscosity, low-molecular-weight hyaluronic acid (LMWHA) for structural support and hyaluronic acid (HA) for lines, grooves and furrows with neuromodulators, lasers and energy devices. PMID:23071398
Synthesis of Speaker Facial Movement to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.
1994-01-01
A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.
Retinoids: Literature Review and Suggested Algorithm for Use Prior to Facial Resurfacing Procedures
Buchanan, Patrick J; Gilman, Robert H
2016-01-01
Vitamin A-containing products have been used topically since the early 1940s to treat various skin conditions. To date, there are four generations of retinoids, a family of Vitamin A-containing compounds. Tretinoin, all-trans-retinoic acid, is a first-generation, naturally occurring retinoid. It is available commercially as a gel or cream. The authors conducted a complete review of all clinical and basic science studies in the literature involving tretinoin treatment recommendations for impending facial procedures. The literature currently lacks definitive recommendations for the use of tretinoin-containing products prior to undergoing facial procedures. Tretinoin pretreatment regimens vary greatly in terms of the strength of retinoid used, the length of the pre-procedure treatment, and the ideal time to stop treatment before the procedure. Based on the current literature and personal experience, the authors set forth a set of guidelines for the use of tretinoin prior to various facial procedures. PMID:27761082
Development and characterization of clay facial mask containing turmeric extract solid dispersion.
Pan-On, Suchiwa; Rujivipat, Soravoot; Ounaroon, Anan; Tiyaboonchai, Waree
2018-04-01
To develop a clay facial mask containing turmeric extract solid dispersion (TESD) for enhancing curcumin water solubility and permeability, and to determine a suitable clay base for the mask. The TESD was prepared by solvent and melting-solvent methods with various TE to polyvinylpyrrolidone (PVP) K30 mass ratios. The physicochemical properties, water solubility, and permeability were examined. The effects of clay types on the physical stability of TESD, water adsorption, and curcumin adsorption capacity were evaluated. The TESD prepared by the solvent method with a TE to PVP K30 mass ratio of 1:2 yielded physically stable, dry powders when mixed with clay. When TESD was dissolved in water, the obtained TESD micelles showed a spherical shape with a mean size of ∼100 nm, resulting in a substantial enhancement of curcumin water solubility (∼5 mg/ml). Bentonite (Bent) and mica (M) showed the highest water adsorption capacity. The TESD's color was altered when mixed with Bent, titanium dioxide (TiO2) and zinc oxide (ZnO), indicating curcumin instability. Talcum (Talc) showed the greatest curcumin adsorption, followed by M and kaolin (K), respectively. Consequently, in vitro permeation studies showed that TESD mixed with Talc had the lowest curcumin permeation, while TESD mixed with M or K showed a permeation profile similar to that of free TESD solutions. The developed TESD-based clay facial mask showed lower curcumin permeation compared with formulations containing Tween 80. The water solubility and permeability of curcumin in a clay-based facial mask could be improved using the solid dispersion technique and a suitable clay base composed of K, M, and Talc.
Bianchi, Francesca A; Roccia, Fabio; Fiorini, Paola; Berrone, Sid
2010-05-01
In this prospective study, we used the Patient and Observer Scar Assessment Scale (POSAS) to evaluate the outcome of the healing process of posttraumatic and surgical facial scars that were treated with self-drying silicone gel, by both the patient and the observer. In our division, the application of base cream and massage represents the standard management of facial scars after suture removal. In the current study, 15 patients (7 men and 8 women) with facial scars were treated with self-drying silicone gel that was applied without massage, and 15 patients (8 men and 7 women) were treated with base cream and massage. Both groups underwent a clinical evaluation of facial scars by POSAS at the time of suture removal (T0) and after 2 months of treatment (T1). The patient rated scar pain, itch, color, stiffness, thickness, and surface (Patient Scale), and the observer rated scar vascularity, pigmentation, thickness, relief, pliability, and surface area (Observer Scale [OS]). The Patient Scale reported the greatest improvement in the items color, stiffness, and thickness. Itch was the only item that worsened in the self-drying silicone gel group. The OS primarily reported an improvement in the items vascularization, pigmentation, and pliability. The only item in the OS that underwent no change from T0 to T1 was surface area. The POSAS revealed satisfactory healing of posttraumatic and surgical facial scars that were treated with self-drying silicone gel.
Facial asymmetry is negatively related to condition in female macaque monkeys
Little, Anthony C.; Paukner, Annika; Woodward, Ruth A.; Suomi, Stephen J.
2013-01-01
The face is an important visual trait in social communication across many species. In evolutionary terms there are large and obvious selective advantages in detecting healthy partners, both in terms of avoiding individuals with poor health to minimise contagion and in mating with individuals with high health to help ensure healthy offspring. Many models of sexual selection suggest that an individual’s phenotype provides cues to their quality. Fluctuating asymmetry is a trait that is proposed to be an honest indicator of quality and previous studies have demonstrated that rhesus monkeys gaze longer at symmetric faces, suggesting preferences for such faces. The current study examined the relationship between measured facial symmetry and measures of health in a captive population of female rhesus macaque monkeys. We measured asymmetry from landmarks marked on front-on facial photographs and computed measures of health based on veterinary health and condition ratings, number of minor and major wounds sustained, and gain in weight over the first four years of life. Analysis revealed that facial asymmetry was negatively related to condition related health measures, with symmetric individuals being healthier than more asymmetric individuals. Facial asymmetry appears to be an honest indicator of health in rhesus macaques and asymmetry may then be used by conspecifics in mate-choice situations. More broadly, our data support the notion that faces are valuable sources of information in non-human primates and that sexual selection based on facial information is potentially important across the primate lineage. PMID:23667290
Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong
2017-01-01
Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.
Melo, Andréa Reis de; Conti, Ana Cláudia de Castro Ferreira; Almeida-Pedrin, Renata Rodrigues; Didier, Victor; Valarelli, Danilo Pinelli; Capelozza Filho, Leopoldino
2017-02-01
The objective of this study was to evaluate facial attractiveness in 30 black individuals, according to the Subjective Facial Analysis criteria. Frontal and profile view photographs of 30 black individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing the facial attractiveness, the evaluators had to identify the structures responsible for the classification as unpleasant and pleasant. Intraexaminer agreement was assessed by using Spearman's correlation, correlation within each category using the Kendall concordance coefficient, and correlation between the 3 categories using the chi-square test and proportions. Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most identified as esthetically unpleasant were the mouth, lips, and face, in the frontal view; and nose and chin in the profile view. The structures most identified as esthetically pleasant were harmony, face, and mouth, in the frontal view; and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and to have a subjective influence; therefore, it is suggested that the patient's opinion regarding facial esthetics should be considered in orthodontic treatment planning.
Marshall, Christopher D; Vaughn, Susan D; Sarko, Diana K; Reep, Roger L
2007-01-01
Florida manatees (Trichechus manatus latirostris) possess modified vibrissae that are used in conjunction with specialized perioral musculature to manipulate vegetation for ingestion, and aid in the tactile exploration of their environment. It is therefore expected that manatees possess a large facial motor nucleus that exhibits a complex organization relative to other taxa. The topographical organization of the facial motor nucleus of five adult Florida manatees was analyzed using neuroanatomical methods. Cresyl violet and hematoxylin staining were used to localize the rostrocaudal extent of the facial motor nucleus as well as the organization and location of subdivisions within this nucleus. Differences in size, length, and organization of the facial motor nucleus among mammals correspond to the functional importance of the superficial facial muscles, including perioral musculature involved in the movement of mystacial vibrissae. The facial motor nucleus of Florida manatees was divided into seven subnuclei. The mean rostrocaudal length, width, and height of the entire Florida manatee facial motor nucleus was 6.6 mm (SD = 0.51; range: 6.2-7.5 mm), 4.7 mm (SD = 0.65; range: 4.0-5.6 mm), and 3.9 mm (SD = 0.26; range: 3.5-4.2 mm), respectively. It is speculated that manatees could possess direct descending corticomotorneuron projections to the facial motor nucleus. This conjecture is based on recent data for rodents, similarities in the rodent and sirenian muscular-vibrissal complex, and the analogous nature of the sirenian cortical Rindenkerne system with the rodent barrel system. Copyright (c) 2007 S. Karger AG, Basel.
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method' which relies on learning facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are correspondingly low.
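The Bayesian conditional classification rule described in the abstract above can be illustrated with a minimal sketch. The snippet below assumes diagonal Gaussian class-conditional models over a generic feature vector; the feature representation, model form, and all variable names are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def fit_gaussian(features):
    """Estimate a diagonal Gaussian from example feature vectors (rows)."""
    mean = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6          # avoid zero variance
    return mean, var

def log_likelihood(x, mean, var):
    """Log p(x | class) under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def posterior_face(x, face_model, nonface_model, prior_face=0.5):
    """Bayesian conditional rule: P(face | x) from class likelihoods and priors."""
    lf = log_likelihood(x, *face_model) + np.log(prior_face)
    ln = log_likelihood(x, *nonface_model) + np.log(1 - prior_face)
    m = max(lf, ln)                             # shift for numerical stability
    return np.exp(lf - m) / (np.exp(lf - m) + np.exp(ln - m))

# Hypothetical usage with random placeholder data standing in for image features
rng = np.random.default_rng(0)
face_model = fit_gaussian(rng.normal(1.0, 0.5, (200, 64)))
nonface_model = fit_gaussian(rng.normal(0.0, 1.0, (200, 64)))
window = rng.normal(1.0, 0.5, 64)
print(posterior_face(window, face_model, nonface_model))
```

In practice the probability would be computed for every candidate window in an image and thresholded to decide which windows contain a face.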
Sutural growth restriction and modern human facial evolution: an experimental study in a pig model
Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D
2010-01-01
Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane in contrast to a flat, superiorly and sagittally oriented infraorbital plane in the control and sham groups. These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a potentially useful model to assess the developmental factors that underlie changing patterns in craniofacial form associated with the emergence of modern humans. PMID:19929910
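For readers unfamiliar with the geometric morphometric step mentioned above (principal components analysis of Procrustes-scaled landmarks), the sketch below shows one common way to align landmark configurations and extract shape components. It is a generic illustration with synthetic data, not the authors' analysis pipeline; the iteration count, landmark counts, and specimen numbers are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def gpa(configs, n_iter=10):
    """Simple generalized Procrustes alignment of (n_specimens, n_landmarks, dims) arrays."""
    X = configs - configs.mean(axis=1, keepdims=True)          # remove translation
    X = X / np.linalg.norm(X, axis=(1, 2), keepdims=True)      # remove scale
    mean = X[0]
    for _ in range(n_iter):
        for i in range(len(X)):
            R, _ = orthogonal_procrustes(X[i], mean)            # remove rotation
            X[i] = X[i] @ R
        mean = X.mean(axis=0)
    return X

# Hypothetical landmark data: 30 specimens, 13 landmarks in 3D
rng = np.random.default_rng(1)
aligned = gpa(rng.normal(size=(30, 13, 3)))

# PCA of the aligned (Procrustes-scaled) coordinates
flat = aligned.reshape(len(aligned), -1)
flat -= flat.mean(axis=0)
_, s, _ = np.linalg.svd(flat, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:5])                                            # variance of leading shape PCs
```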
Gor, Troy; Kau, Chung How; English, Jeryl D; Lee, Robert P; Borbely, Peter
2010-03-01
The aim of this study was to assess the use of 3-dimensional facial averages in determining facial morphologic differences in 2 white population groups. Three-dimensional images were obtained in a reproducible and controlled environment from a commercially available stereo-photogrammetric camera capture system. The 3dMDface system (3dMD, Atlanta, Ga) photographed 200 subjects from 2 population groups (Budapest, Hungary, and Houston, Tex); each group included 50 men and 50 women, aged 18 to 30 years. Each face was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was used until an average composite face of 1 man and 1 woman was obtained for each subgroup (Hungarian men, Hungarian women, Texas men, and Texas women). These average facial composites were superimposed (men and women) based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed between the population groups. These differences could be seen in the nasal, malar, lips, and lower facial regions. In general, the mean facial differences were 0.55 +/- 0.60 mm between the Hungarian and Texas women, and 0.44 +/- 0.42 mm between the Hungarian and Texas men. The ranges of differences were -2.02 to 3.77 and -2.05 to 1.94 mm for the female and male pairings, respectively. Three-dimensional facial averages representing the facial soft-tissue morphology of adults can be used to assess diagnostic and treatment regimens for patients by population. Each population is different with respect to their soft-tissue structures, and traditional soft-tissue normative data (eg, white norms) should be altered and used for specific groups. American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Facial skin color measurement based on camera colorimetric characterization
NASA Astrophysics Data System (ADS)
Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao
2016-10-01
The objective measurement of facial skin color and its variance is of great significance, as much information can be obtained from it. In this paper, we developed a new skin color measurement procedure which includes the following parts: first, a new skin tone color checker, based on the Pantone Skin Tone Color Checker, was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that draws on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output facial image that appears to have been taken under a canonical light; finally, the validity and accuracy of our method were verified by comparing the results obtained by our procedure with those obtained by a spectrophotometer.
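The chromatic adaptation step in the procedure above can be sketched as follows. This uses the standard Bradford/von Kries formulation with illustrative white points; it does not reproduce the paper's camera characterization or illumination estimation methods, and the specific skin and illuminant values are made up for demonstration.

```python
import numpy as np

# Bradford matrix mapping XYZ to a cone-like response space (standard published values)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def chromatic_adaptation(xyz, white_src, white_dst):
    """Von Kries / Bradford adaptation of XYZ colors from a source to a target illuminant."""
    lms_src = BRADFORD @ white_src
    lms_dst = BRADFORD @ white_dst
    gain = np.diag(lms_dst / lms_src)               # per-channel gain in the cone space
    M = np.linalg.inv(BRADFORD) @ gain @ BRADFORD   # full XYZ-to-XYZ adaptation matrix
    return xyz @ M.T

# Hypothetical skin patch measured under an estimated scene illuminant, mapped to D65
skin_xyz = np.array([[0.42, 0.36, 0.25]])
scene_white = np.array([1.09, 1.00, 0.64])          # warm light (illustrative)
d65_white = np.array([0.9505, 1.0000, 1.0890])
print(chromatic_adaptation(skin_xyz, scene_white, d65_white))
```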
Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi
2015-01-01
This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups, representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values upon specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in the self-drawings, indicating universal production principles. However, children differed in single drawing characteristics, depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with less facial details, and neutral facial expressions.
Myomodulation with Injectable Fillers: An Innovative Approach to Addressing Facial Muscle Movement.
de Maio, Maurício
2018-06-01
Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
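As a rough illustration of how the effective dimensionality of expression kinematics can be probed, the sketch below applies ordinary PCA to synthetic frame data generated from two hidden primitives. The authors determined dimensionality by learning a low-dimensional model and comparing models in a Bayesian framework; PCA with a 95% variance cut-off is a simpler stand-in, and all data and dimensions here are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 11 expressions x 100 time frames, each frame a vector of
# facial activation values (e.g., blendshape weights or marker displacements).
rng = np.random.default_rng(2)
primitives = rng.random((2, 30))                       # two underlying movement primitives
weights = rng.random((11 * 100, 2))                    # time-varying mixing weights
frames = weights @ primitives + rng.normal(0, 0.01, (1100, 30))

pca = PCA().fit(frames)
cum = np.cumsum(pca.explained_variance_ratio_)
n_primitives = int(np.searchsorted(cum, 0.95) + 1)     # components needed for 95% variance
print(n_primitives)                                    # ~2 for this synthetic example
```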
Towards Multimodal Emotion Recognition in E-Learning Environments
ERIC Educational Resources Information Center
Bahreini, Kiavash; Nadolski, Rob; Westera, Wim
2016-01-01
This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner's facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and…
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to synchronization, when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodal fusion-based recognition will become the trend of emotion recognition in the future.
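A hedged sketch of the feature selection and fusion idea is given below. It substitutes a mutual-information filter for the rough set-based selection used in the system, fuses the selected speech and facial features by concatenation, and trains a generic SVM on synthetic data; the per-modality feature counts follow the abstract, but everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 300
speech = rng.normal(size=(n, 37))      # 37 speech features per utterance (synthetic)
face = rng.normal(size=(n, 33))        # 33 facial features per utterance (synthetic)
labels = rng.integers(0, 4, n)         # four emotion classes (illustrative)

# Select informative features per modality, then fuse by concatenation
sel_speech = SelectKBest(mutual_info_classif, k=13).fit(speech, labels)
sel_face = SelectKBest(mutual_info_classif, k=10).fit(face, labels)
fused = np.hstack([sel_speech.transform(speech), sel_face.transform(face)])

clf = make_pipeline(StandardScaler(), SVC()).fit(fused, labels)
print(clf.score(fused, labels))        # training accuracy on the synthetic data
```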
Two-Stream Transformer Networks for Video-based Face Alignment.
Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a two-stream transformer network (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches which cannot explicitly model the temporal dependency in videos and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes the video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on benchmark video-based face alignment datasets show very competitive performance of our method in comparison to the state of the art.
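To make the two-stream idea concrete, the toy PyTorch sketch below pairs a per-frame spatial encoder with a temporal transformer over the frame sequence and regresses landmark coordinates. It is a deliberately simplified illustration of the spatial/temporal decomposition, not the authors' TSTN architecture; layer sizes, the landmark count, and the input resolution are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamSketch(nn.Module):
    """Toy two-stream model: a spatial encoder per frame plus a temporal
    transformer across frames, regressing 68 facial landmarks per frame."""
    def __init__(self, d_model=128, n_landmarks=68):
        super().__init__()
        self.spatial = nn.Sequential(                 # spatial stream (per-frame appearance)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)  # temporal stream
        self.head = nn.Linear(d_model, n_landmarks * 2)

    def forward(self, video):                         # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.spatial(video.flatten(0, 1)).view(b, t, -1)
        feats = self.temporal(feats)                  # share information across frames
        return self.head(feats).view(b, t, -1, 2)

landmarks = TwoStreamSketch()(torch.randn(2, 8, 3, 112, 112))
print(landmarks.shape)                                # torch.Size([2, 8, 68, 2])
```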
Head and facial injuries due to cluster munitions.
Fares, Youssef; Fares, Jawad; Gebeily, Souheil
2014-06-01
Cluster munitions are weapons that scatter smaller sub-munitions intended to kill or mutilate on impact. They have been used by the Israeli army in the south of Lebanon and are now scattered over wide rural areas affecting its inhabitants. Because of their easily "pickable" nature, sub-munitions can inflict injuries to the head and face regions. In this study, we aimed to explore the head and face injuries along with their clinical features in a group of Lebanese patients who suffered from such injuries due to a sub-munition's detonation. The study included all the cases reported between 14 August 2006 and 15 February 2013, with head and face injuries related to cluster bombs. Injuries were classified into brain, eye, otologic and auditory impairments, oral and maxillofacial, and skin and soft-tissue injuries. Psychological effects of these patients were also examined as for post-traumatic stress disorder, major depressive disorder, generalized anxiety disorder and acute stress syndrome. During the study period, there were 417 casualties as a result of cluster munitions' blasts. Out of the total number of victims, 29 (7 %) were injured in the head and the face region. The convention on cluster munitions of 2008 should be adhered to, as these inhumane weapons indiscriminately and disproportionately harm innocent civilians, thereby violating the well-established international principles governing conflict and war today.
[Melkersson-Rosenthal syndrome. Report of five cases and review of the literature].
Ghorbel, Imed Ben; SioudDhrif, Asma; Lamloum, Mounir; Trabelsi, Salem; Habib Houman, Mohammed
2006-12-01
The goal of this work is to report five cases of Melkersson-Rosenthal syndrome with a literature review. It is a rare entity characterized, in its complete presentation, by the association of recurrent orofacial swelling, peripheral facial palsy and plicated tongue. Incomplete forms are more frequent and their diagnosis is more difficult to establish. The diagnosis is based on major and minor clinical and histological criteria sorted into three levels. There are four forms of MRS. The pathogenesis of this syndrome is still unknown and its treatment remains empirical, based on topical or systemic steroids with or without a cheiloplasty procedure. MRS should be considered in the presence of any recurrent peripheral facial palsy and/or chronic facial swelling.
Facial expression reconstruction on the basis of selected vertices of triangle mesh
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed through video recordings, this kind of high-quality data is often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use the obtained feature points to encode the shape of surrounding vertices in a way that can be easily transferred to another mesh with corresponding feature points. In this paper we present a method used for obtaining information for the purpose of reconstructing changes in the facial surface on the basis of selected feature points.
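One simple way to realize the encoding described above (storing each vertex's position relative to nearby feature points so it can be transferred to another mesh with corresponding points) is sketched below. The nearest-feature-point offset scheme and all array shapes are illustrative assumptions; the paper's actual encoding may differ.

```python
import numpy as np

def encode_offsets(vertices, feature_points):
    """For each vertex, store the index of its nearest feature point and the offset to it."""
    d = np.linalg.norm(vertices[:, None, :] - feature_points[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    offsets = vertices - feature_points[nearest]
    return nearest, offsets

def transfer(target_feature_points, nearest, offsets):
    """Reconstruct vertex positions on another mesh from its corresponding feature points."""
    return target_feature_points[nearest] + offsets

# Hypothetical data: a dense source mesh plus sparse feature points on both meshes
rng = np.random.default_rng(4)
src_vertices = rng.random((5000, 3))
src_features = src_vertices[rng.choice(5000, 40, replace=False)]
dst_features = src_features + rng.normal(0, 0.01, src_features.shape)  # corresponding points

nearest, offsets = encode_offsets(src_vertices, src_features)
dst_vertices = transfer(dst_features, nearest, offsets)
print(dst_vertices.shape)
```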
Kilangalanga, Janvier; Ndjemba, Jean Marie; Uvon, Pitchouna A; Kibangala, Felix M; Mwandulo, Jean-Lebone Safari B; Mavula, Nicaise; Ndombe, Martin; Kazadi, Junior; Limbaka, Henry; Cohn, Daniel; Tougoue, Jean-Jacques; Kabore, Achille; Rotondo, Lisa; Willis, Rebecca; Bio, Amadou Alfa; Kadri, Boubacar; Bakhtiari, Ana; Ngondi, Jeremiah M; Solomon, Anthony W
2017-08-29
Trachoma was suspected to be endemic in parts of the Democratic Republic of the Congo (DRC). We aimed to estimate prevalences of trachomatous inflammation-follicular (TF), trichiasis, and water and sanitation (WASH) indicators in suspected-endemic Health Zones. A population-based prevalence survey was undertaken in each of 46 Health Zones across nine provinces of DRC, using Global Trachoma Mapping Project methods. A two-stage cluster random sampling design was used in each Health Zone, whereby 25 villages (clusters) and 30 households per cluster were sampled. Consenting eligible participants (children aged 1-9 years and adults aged ≥15 years) were examined for trachoma by GTMP-certified graders; households were assessed for access to WASH. A total of 32,758 households were surveyed, and 141,853 participants (98.2% of those enumerated) were examined for trachoma. Health Zone-level TF prevalence in 1-9-year-olds ranged from 1.9-41.6%. Among people aged ≥15 years, trichiasis prevalences ranged from 0.02-5.1% (95% CI 3.3-6.8). TF prevalence in 1-9-year-olds was ≥5% in 30 Health Zones, while trichiasis prevalence was ≥0.2% in 37 Health Zones. Trachoma is a public health problem in 39 of 46 Health Zones surveyed. To meet elimination targets, 37 Health Zones require expanded trichiasis surgery services while 30 health zones require antibiotics, facial cleanliness and environmental improvement interventions. Survey data suggest that trachoma is widespread: further surveys are warranted.
Facial expression identification using 3D geometric features from Microsoft Kinect device
NASA Astrophysics Data System (ADS)
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
2016-05-01
Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied the kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
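The classification step described above (a kNN classifier over sequences compared by dynamic time warping) can be sketched as follows. The random frame feature vectors here stand in for the distance-based geometric features computed from the Kinect face mesh; the plain DTW formulation, neighbor count, and synthetic sequences are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dtw(a, b):
    """Plain dynamic time warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query_seq, train_seqs, train_labels, k=3):
    """kNN over DTW distances between whole expression sequences."""
    dists = [dtw(query_seq, s) for s in train_seqs]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[counts.argmax()]

# Hypothetical sequences: each is (frames x features), where each frame vector would hold
# distances from mesh points to selected reference points; labels are expression names.
rng = np.random.default_rng(5)
train = [rng.random((rng.integers(20, 30), 8)) for _ in range(6)]
labels = ["smile", "smile", "surprise", "surprise", "sad", "sad"]
print(knn_predict(rng.random((25, 8)), train, labels))
```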
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and is one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and then the hair position prior distributions and the hair color likelihood distribution function are efficiently estimated from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments showed that with the proposed hair segmentation algorithm the resulting facial caricatures are vivid and satisfying. PMID:24592182
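A much-simplified sketch of the pipeline above is given below: a position prior is combined with a Gaussian hair-color likelihood to obtain an initial mask, which is then refined with K-means. The graph cuts optimization is replaced by a per-pixel threshold for brevity, and the color model, prior, and cluster-selection rule are illustrative assumptions rather than the published algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_hair_mask(image, position_prior, color_mean, color_cov, threshold=0.5):
    """Combine a per-pixel hair position prior with a Gaussian hair-color likelihood.
    (The paper optimizes an energy with graph cuts; a simple threshold is used here.)"""
    diff = image.reshape(-1, 3) - color_mean
    inv = np.linalg.inv(color_cov)
    maha = np.einsum('ij,jk,ik->i', diff, inv, diff)          # squared Mahalanobis distance
    color_lik = np.exp(-0.5 * maha).reshape(image.shape[:2])
    score = position_prior * color_lik
    return score > threshold * score.max()

def refine_with_kmeans(image, mask, n_clusters=2):
    """Re-cluster pixels inside the initial mask to tighten the hair region."""
    pixels = image[mask]
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    hair_cluster = np.argmin(km.cluster_centers_.sum(axis=1))  # darker cluster (illustrative)
    refined = np.zeros(image.shape[:2], dtype=bool)
    refined[mask] = km.labels_ == hair_cluster
    return refined

# Hypothetical inputs: a random image and a prior that expects hair near the top
rng = np.random.default_rng(6)
img = rng.random((120, 100, 3))
prior = np.zeros((120, 100))
prior[:50] = 1.0
mask = initial_hair_mask(img, prior, color_mean=np.array([0.2, 0.15, 0.1]),
                         color_cov=np.eye(3) * 0.05)
print(refine_with_kmeans(img, mask).sum())
```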
Norm-based coding of facial identity in adults with autism spectrum disorder.
Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D
2015-03-01
It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S
2013-12-01
In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.
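For readers who want to see how study-level correlations of this kind are typically pooled, the sketch below applies the standard Fisher z approach with inverse-variance weights. This is a generic illustration with made-up study values, not the meta-analytic procedure actually used in the work above.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of correlations via Fisher's z transform,
    weighting each study by n - 3 (a standard approach, not necessarily
    the exact procedure of the meta-analysis described above)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                    # Fisher z transform of each study's r
    w = ns - 3.0                          # inverse-variance weights
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return np.tanh(z_bar), (np.tanh(lo), np.tanh(hi))

# Hypothetical study-level FR-EP correlations and sample sizes
r_pooled, ci = pooled_correlation([0.45, 0.55, 0.60, 0.40], [50, 80, 120, 60])
print(round(r_pooled, 2), ci)
```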
Gender, age, and psychosocial context of the perception of facial esthetics.
Tole, Nikoleta; Lajnert, Vlatka; Kovacevic Pavicic, Daniela; Spalj, Stjepan
2014-01-01
To explore the effects of gender, age, and psychosocial context on the perception of facial esthetics. The study included 1,444 Caucasian subjects aged 16 to 85 years. Two sets of color photographs illustrating 13 male and 13 female Caucasian facial type alterations, representing different skeletal and dentoalveolar components of sagittal maxillary-mandibular relationships, were used to estimate facial profile attractiveness. The examinees graded the profiles based on a 0 to 10 numerical rating scale. The examinees graded the profiles of their own sex only from a social perspective, whereas opposite-sex profiles were graded separately from both the social and the emotional perspective. The perception of facial esthetics was found to be related to the gender, age, and psychosocial context of evaluation (p < 0.05). The profiles most attractive to men were the orthognathic female profile from the social perspective and the moderate bialveolar protrusion from the emotional perspective. The profiles most attractive to women were the orthognathic male profile when graded from the social aspect and the mild bialveolar retrusion when graded from the emotional aspect. Older assessors gave higher attractiveness grades. When planning treatment that modifies the facial profile, the clinician should bear in mind that the perception of facial profile esthetics is a complex phenomenon influenced by biopsychosocial factors. This study allows a better understanding of the concept of perception of facial esthetics that includes gender, age, and psychosocial context. © 2013 Wiley Periodicals, Inc.
Automatic Facial Expression Recognition and Operator Functional State
NASA Technical Reports Server (NTRS)
Blanson, Nina
2012-01-01
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
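The abstract above mentions a basic implementation built on Haar classifiers and OpenCV. The snippet below is a generic OpenCV sketch of that pattern (cascade detection of face and eye regions on a live video stream); it is not the program described in the abstract, and Haar cascades localize coarse facial regions rather than precise landmarks.

```python
import cv2

# Load OpenCV's bundled Haar cascades (these files ship with opencv-python)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # live video stream from the default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi = gray[y:y + h, x:x + w]           # search for eyes inside the face region
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 0, 0), 2)
    cv2.imshow("facial regions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```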
Facial affect processing and depression susceptibility: cognitive biases and cognitive neuroscience.
Bistricky, Steven L; Ingram, Rick E; Atchley, Ruth Ann
2011-11-01
Facial affect processing is essential to social development and functioning and is particularly relevant to models of depression. Although cognitive and interpersonal theories have long described different pathways to depression, cognitive-interpersonal and evolutionary social risk models of depression focus on the interrelation of interpersonal experience, cognition, and social behavior. We therefore review the burgeoning depressive facial affect processing literature and examine its potential for integrating disciplines, theories, and research. In particular, we evaluate studies in which information processing or cognitive neuroscience paradigms were used to assess facial affect processing in depressed and depression-susceptible populations. Most studies have assessed and supported cognitive models. This research suggests that depressed and depression-vulnerable groups show abnormal facial affect interpretation, attention, and memory, although findings vary based on depression severity, comorbid anxiety, or length of time faces are viewed. Facial affect processing biases appear to correspond with distinct neural activity patterns and increased depressive emotion and thought. Biases typically emerge in depressed moods but are occasionally found in the absence of such moods. Indirect evidence suggests that childhood neglect might cultivate abnormal facial affect processing, which can impede social functioning in ways consistent with cognitive-interpersonal and interpersonal models. However, reviewed studies provide mixed support for the social risk model prediction that depressive states prompt cognitive hypervigilance to social threat information. We recommend prospective interdisciplinary research examining whether facial affect processing abnormalities promote, or are promoted by, depressogenic attachment experiences, negative thinking, and social dysfunction.
Psychocentricity and participant profiles: implications for lexical processing among multilinguals
Libben, Gary; Curtiss, Kaitlin; Weber, Silke
2014-01-01
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique, that we term Facial Profiles. This technique is based on Chernoff faces developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically-related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
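The Facial Profile idea (recoding participant variables as facial feature sizes, in the spirit of Chernoff faces) can be sketched with a few matplotlib patches. The mapping below, with eye size for reading, mouth size for speaking, and ear size for listening, follows the description in the abstract, but the scaling constants, layout, and participant values are illustrative assumptions.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening, title=""):
    """Draw one profile: eye size ~ reading, mouth size ~ speaking, ear size ~ listening.
    Inputs are ability scores scaled to [0, 1]."""
    ax.add_patch(Circle((0, 0), 1.0, fill=False))                       # head outline
    for x in (-0.4, 0.4):                                               # eyes
        ax.add_patch(Ellipse((x, 0.3), 0.15 + 0.3 * reading, 0.1 + 0.2 * reading, fill=False))
    ax.add_patch(Ellipse((0, -0.45), 0.2 + 0.6 * speaking, 0.1 + 0.2 * speaking, fill=False))
    for x in (-1.05, 1.05):                                             # ears
        ax.add_patch(Ellipse((x, 0), 0.1 + 0.2 * listening, 0.2 + 0.4 * listening, fill=False))
    ax.set_xlim(-1.6, 1.6)
    ax.set_ylim(-1.4, 1.4)
    ax.set_aspect("equal")
    ax.axis("off")
    ax.set_title(title)

# Hypothetical participants: (reading, speaking, listening) scores in a second language
fig, axes = plt.subplots(1, 2)
facial_profile(axes[0], 0.9, 0.4, 0.7, "Participant A (L2)")
facial_profile(axes[1], 0.3, 0.8, 0.5, "Participant B (L2)")
plt.show()
```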
The Two Sides of Beauty: Laterality and the Duality of Facial Attractiveness
ERIC Educational Resources Information Center
Franklin, Robert G., Jr.; Adams, Reginald B., Jr.
2010-01-01
We hypothesized that facial attractiveness represents a dual judgment, a combination of reward-based, sexual processes, and aesthetic, cognitive processes. Herein we describe a study that demonstrates that sexual and nonsexual processes both contribute to attractiveness judgments and that these processes can be dissociated. Female participants…
Production of Emotional Facial Expressions in European American, Japanese, and Chinese Infants.
ERIC Educational Resources Information Center
Camras, Linda A.; And Others
1998-01-01
European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…
Qi, Yue; Li, Qi; Du, Feng
2018-01-01
In the era of globalization, people meet strangers from different countries more often than ever. Previous research indicates that impressions of trustworthiness based on facial appearance play an important role in interpersonal cooperation behaviors. The current study examined whether additional information about socioeconomic status (SES), including national prosperity and individual monthly income, affects facial judgments and appearance-based trust decisions. Besides reproducing previous conclusions that trustworthy faces receive more money than untrustworthy faces, the present study showed that high-income individuals were judged as more trustworthy than low-income individuals, and also were given more money in a trust game. However, trust behaviors were not modulated by the nationality of the faces. The present research suggests that people are more likely to trust strangers with a high income, compared with individuals with a low income.
Stephan, Carl N; Devine, Matthew
2009-10-30
The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because it increases attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue, the temporal regions of nineteen caucasoid human cadavers (10 females, 9 males; mean age=84 years, s=9 years, range=58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components was quantified in nine other cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently undertaken in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.
Masseteric nerve for reanimation of the smile in short-term facial paralysis.
Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro
2014-02-01
Our aim was to describe our experience with the masseteric nerve in the reanimation of short-term facial paralysis. We present our outcomes using a quantitative measurement system and discuss its advantages and disadvantages. Between 2000 and 2012, 23 patients had their facial paralysis reanimated by masseteric-facial coaptation. All patients presented with complete unilateral paralysis. Their background, the aetiology of the paralysis, and the surgical details were recorded. A retrospective study of movement analysis was made using an automatic optical system (Facial Clima). Commissural excursion and commissural contraction velocity were also recorded. The mean age at reanimation was 43(8) years. The aetiology of the facial paralysis included acoustic neurinoma, fracture of the skull base, schwannoma of the facial nerve, resection of a cholesteatoma, and varicella zoster infection. The mean duration of facial paralysis was 16(5) months. Follow-up was more than 2 years in all patients except 1 in whom it was 12 months. The mean duration to recovery of tone (as reported by the patient) was 67(11) days. Postoperative commissural excursion was 8(4)mm for the reanimated side and 8(3)mm for the healthy side (p=0.4). Likewise, commissural contraction velocity was 38(10)mm/s for the reanimated side and 43(12)mm/s for the healthy side (p=0.23). Mean percentage of recovery was 92(5)% for commissural excursion and 79(15)% for commissural contraction velocity. Masseteric nerve transposition is a reliable and reproducible option for the reanimation of short-term facial paralysis with reduced donor site morbidity and good symmetry with the opposite healthy side. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Automated and objective action coding of facial expressions in patients with acute facial palsy.
Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando
2015-05-01
The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11%, and there was no significant difference from patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
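The abstract does not state how its asymmetry score is computed, so the following is only one plausible definition, offered as an illustration of the general idea of contrasting activated-AU counts between hemifaces; the function name, the percentage formula, and the example AU sets are assumptions, not the study's method.

```python
# Illustrative sketch: one plausible way to express bilateral AU asymmetry as a
# percentage. The study's exact formula is not given in the abstract, so this
# definition (relative difference in activated-AU counts) is an assumption.
def au_asymmetry_percent(aus_left, aus_right):
    """aus_left / aus_right: sets of Action Unit labels detected on each hemiface."""
    n_left, n_right = len(aus_left), len(aus_right)
    if max(n_left, n_right) == 0:
        return 0.0
    return 100.0 * abs(n_left - n_right) / max(n_left, n_right)

# Hypothetical example: 6 AUs detected on the healthy side, 2 on the paralyzed side.
print(au_asymmetry_percent({"AU1", "AU2", "AU6", "AU12", "AU25", "AU26"},
                           {"AU1", "AU25"}))   # -> about 66.7
```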
Santosa, Katherine B; Fattah, Adel; Gavilán, Javier; Hadlock, Tessa A; Snyder-Warwick, Alison K
2017-07-01
There is no widely accepted assessment tool or common language used by clinicians caring for patients with facial palsy, making exchange of information challenging. Standardized photography may represent such a language and is imperative for precise exchange of information and comparison of outcomes in this special patient population. To review the literature to evaluate the use of facial photography in the management of patients with facial palsy and to examine the use of photography in documenting facial nerve function among members of the Sir Charles Bell Society-a group of medical professionals dedicated to care of patients with facial palsy. A literature search was performed to review photographic standards in patients with facial palsy. In addition, a cross-sectional survey of members of the Sir Charles Bell Society was conducted to examine use of medical photography in documenting facial nerve function. The literature search and analysis was performed in August and September 2015, and the survey was conducted in August and September 2013. The literature review searched EMBASE, CINAHL, and MEDLINE databases from inception of each database through September 2015. Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. Articles that discussed patients with facial palsy and outlined photographic guidelines for this patient population were included in the study. The survey was disseminated to the Sir Charles Bell Society members in electronic form. It consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. In total, 393 articles were identified in the literature search, 7 of which fit the inclusion criteria. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on photographic standards for the population with facial palsy. Eighty-three of 151 members (55%) of the Sir Charles Bell Society responded to the survey. All survey respondents used photographic documentation, but there was variability in which facial expressions were used. Eighty-two percent (68 of 83) used some form of videography. From these data, we propose a set of minimum photographic standards for patients with facial palsy, including the following 10 static views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view. There is no consensus on photographic standardization to report outcomes for patients with facial palsy. Minimum photographic standards for facial paralysis publications are proposed. Videography of the dynamic movements of these views should also be recorded. NA.
Forensic facial comparison in South Africa: State of the science.
Steyn, M; Pretorius, M; Briers, N; Bacci, N; Johnson, A; Houlton, T M R
2018-06-01
Forensic facial comparison (FFC) is a scientific technique used to link suspects to a crime scene based on the analysis of photos or video recordings from that scene. While basic guidelines on practice and training are provided by the Facial Identification Scientific Working Group, details of how these are applied across the world are scarce. FFC is frequently used in South Africa, with more than 700 comparisons conducted in the last two years alone. In this paper, the standards of practice are outlined, with new proposed levels of agreement/conclusions. We outline three levels of training that were established, with training in facial anatomy, terminology, principles of image comparison, image science, facial recognition and computer skills being aimed at developing general competency. Training in generating court charts and in understanding court case proceedings is being developed specifically for the South African context. Various shortcomings still exist, specifically with regard to knowledge of the reliability of the technique. These need to be addressed in future research. Copyright © 2018 Elsevier B.V. All rights reserved.
Wang, Shu-Fan; Lai, Shang-Hong
2011-10-01
Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.
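The following sketch illustrates only the general idea of modelling expression deformations with a Gaussian mixture over a low-dimensional embedding; it is not the authors' RWF/manifold pipeline. The data shapes, the use of PCA as a stand-in for the learned manifold, and the component count are all assumptions.

```python
# Minimal sketch of the general idea only (not the authors' RWF pipeline):
# embed expression deformation vectors in a low-dimensional space and model
# their distribution with a Gaussian mixture. Data shapes are hypothetical.
import numpy as np
from sklearn.decomposition import PCA          # stand-in for a learned manifold
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical training set: 500 expression deformations, each a flattened
# displacement field over 1000 3D vertices (3 * 1000 coordinates).
deformations = rng.normal(size=(500, 3000))

embedding = PCA(n_components=10).fit(deformations)      # low-dimensional "manifold"
codes = embedding.transform(deformations)
gmm = GaussianMixture(n_components=6, covariance_type="full").fit(codes)

# The fitted density could then serve as a prior: score a candidate deformation
# during reconstruction (higher log-likelihood = more plausible expression).
candidate = embedding.transform(rng.normal(size=(1, 3000)))
print(gmm.score_samples(candidate))
```

In an energy-minimisation framework of the kind the abstract describes, a term like the negative log-likelihood above would typically be weighed against an image-fitting term, though the exact formulation here is only a guess.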
Pediatric mandibular fractures.
Thaller, S R; Mabourakh, S
1991-06-01
Despite considerable interest, facial fractures, particularly mandibular fractures, in the pediatric age group comprise only a modest proportion of the facial fractures that occur within the general population. Several large series report an overall incidence of approximately 1% of all facial bone fractures. A considerable volume of literature has been generated describing the pattern of injury and treatment modalities for pediatric facial bone fractures. At our institution, which is an extremely busy university-based regional trauma center, we have witnessed a persistent escalation in the number of patients requiring repair of their facial bone fractures. During the period of January 1989 through January 1990, we treated a total of 204 patients for repair of mandible fractures. An analysis of the records of this group revealed only 3 patients who were younger than 4 years of age and 2 additional patients younger than 8 years. There were another 10 patients 17 years and younger, so that patients aged 17 years or younger accounted for approximately 7% (15 of 204) of the series. Additionally, we found that within this seemingly small group there was a surprisingly high incidence of severe associated injuries.
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it is highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode, and study how facial attractiveness is assessed. Faces rotated by 90° (tilted to either side) and by 180° were rated on attractiveness and distinctiveness scales. For both rotated orientations, we found that faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Johal, Ama; Chaggar, Amrit; Zou, Li Fong
2018-03-01
The present study used the optical surface laser scanning technique to compare the facial features of patients aged 8-18 years presenting with Class I and Class III incisor relationship in a case-control design. Subjects with a Class III incisor relationship, aged 8-18 years, were age and gender matched with Class I controls and underwent a 3-dimensional (3-D) optical surface scan of the facial soft tissues. Landmark analysis revealed that Class III subjects displayed greater mean dimensions than the control group, most notably between the ages of 8-10 and 17-18 years in both males and females, with respect to antero-posterior (P = 0.01) and vertical (P = 0.006) facial dimensions. Surface-based analysis revealed the greatest difference in the lower facial region, followed by the mid-face, whilst the upper face remained fairly consistent. Significant detectable differences were found in the surface facial features of developing Class III subjects.
Ozaki, Mine; Takushima, Akihiko; Momosawa, Akira; Kurita, Masakazu; Harii, Kiyonori
2008-07-01
For the treatment of facial paralysis, suture suspension of soft tissue is considered effective because it is less invasive and technically relatively simple, with minimal bruising and rapid recovery. However, the effect of suture suspension may not last for a long period of time. We obtained good outcomes with temporary static suture suspension in 5 cases of severe facial paralysis in the intervening period between the onset of paralysis and expected spontaneous recovery. We used the S-S Cable Suture (Medical U&A, Tokyo, Japan), which was based on a modification of the previously established method using the Gore-Tex cable suture originally reported by Sasaki et al in 2002. Because of the ease of the technique and its relatively strong capability to lift the malar pad, we recommend it as a useful procedure for patients with acute facial paralysis and possible spontaneous recovery, as the quick elimination of facial distortion improves quality of life.
Weaver syndrome and EZH2 mutations: Clarifying the clinical phenotype.
Tatton-Brown, Katrina; Murray, Anne; Hanks, Sandra; Douglas, Jenny; Armstrong, Ruth; Banka, Siddharth; Bird, Lynne M; Clericuzio, Carol L; Cormier-Daire, Valerie; Cushing, Tom; Flinter, Frances; Jacquemont, Marie-Line; Joss, Shelagh; Kinning, Esther; Lynch, Sally Ann; Magee, Alex; McConnell, Vivienne; Medeira, Ana; Ozono, Keiichi; Patton, Michael; Rankin, Julia; Shears, Debbie; Simon, Marleen; Splitt, Miranda; Strenger, Volker; Stuurman, Kyra; Taylor, Clare; Titheradge, Hannah; Van Maldergem, Lionel; Temple, I Karen; Cole, Trevor; Seal, Sheila; Rahman, Nazneen
2013-12-01
Weaver syndrome, first described in 1974, is characterized by tall stature, a typical facial appearance, and variable intellectual disability. In 2011, mutations in the histone methyltransferase, EZH2, were shown to cause Weaver syndrome. To date, we have identified 48 individuals with EZH2 mutations. The mutations were primarily missense mutations occurring throughout the gene, with some clustering in the SET domain (12/48). Truncating mutations were uncommon (4/48) and only identified in the final exon, after the SET domain. Through analyses of clinical data and facial photographs of EZH2 mutation-positive individuals, we have shown that the facial features can be subtle and the clinical diagnosis of Weaver syndrome is thus challenging, especially in older individuals. However, tall stature is very common, reported in >90% of affected individuals. Intellectual disability is also common, present in ~80%, but is highly variable and frequently mild. Additional clinical features which may help in stratifying individuals to EZH2 mutation testing include camptodactyly, soft, doughy skin, umbilical hernia, and a low, hoarse cry. Considerable phenotypic overlap between Sotos and Weaver syndromes is also evident. The identification of an EZH2 mutation can therefore provide an objective means of confirming a subtle presentation of Weaver syndrome and/or distinguishing Weaver and Sotos syndromes. As mutation testing becomes increasingly accessible and larger numbers of EZH2 mutation-positive individuals are identified, knowledge of the clinical spectrum and prognostic implications of EZH2 mutations should improve. © 2013 Wiley Periodicals, Inc.
Local ICA for the Most Wanted face recognition
NASA Astrophysics Data System (ADS)
Guan, Xin; Szu, Harold H.; Markowitz, Zvi
2000-04-01
Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes. Sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis (ICA) bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes of 'yes, no, abstain' are obtained from all facial regions and tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. In this way a receiver operating characteristic (ROC) curve of probability of detection (PD) versus false alarm rate (FAR) is obtained.
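A hedged sketch of the region-wise idea follows: learn separate ICA bases for a few facial regions and combine per-region "yes / no / abstain" votes. The region slicing, thresholds, random data, and voting rule are all hypothetical stand-ins, not the paper's actual system.

```python
# Hedged sketch of region-wise ICA with voting. Region boundaries, thresholds,
# and data are invented for illustration; only the overall structure (separate
# ICA bases per region, then a vote tally) follows the abstract.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
gallery = rng.normal(size=(50, 64 * 64))          # 50 enrolled faces, 64x64 pixels
probe = gallery[7] + 0.1 * rng.normal(size=64 * 64)

regions = {"eyes": slice(0, 64 * 20),             # top band of the image
           "nose": slice(64 * 20, 64 * 44),
           "mouth": slice(64 * 44, 64 * 64)}

votes = []
for name, sl in regions.items():
    ica = FastICA(n_components=8, random_state=0).fit(gallery[:, sl])
    g_codes = ica.transform(gallery[:, sl])
    p_code = ica.transform(probe[sl].reshape(1, -1))
    d = np.linalg.norm(g_codes - p_code, axis=1)
    best, second = np.partition(d, 1)[:2]
    if second == 0 or best / second < 0.8:        # confident match in this region
        votes.append((name, "yes", int(np.argmin(d))))
    elif best > np.median(d):                     # region looks unfamiliar (disguised?)
        votes.append((name, "abstain", None))
    else:
        votes.append((name, "no", None))

yes_ids = [i for _, v, i in votes if v == "yes"]
print(votes, "alarm" if len(yes_ids) >= 2 else "no alarm")
```

The point of the per-region abstention is that a disguised region (beard, sunglasses) can drop out of the tally instead of dragging down an otherwise strong match.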
Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi
2015-01-01
This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups, representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values upon specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in the self-drawings, indicating universal production principles. However, children differed in single drawing characteristics, depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with fewer facial details, and neutral facial expressions. PMID:26136707
Age Bias in Selection Decisions: The Role of Facial Appearance and Fitness Impressions
Kaufmann, Michèle C.; Krings, Franciska; Zebrowitz, Leslie A.; Sczesny, Sabine
2017-01-01
This research examined the impact of facial age appearance on hiring, and impressions of fitness as the underlying mechanism. In two experimental hiring simulations, one with lay persons and one with Human Resource professionals, participants evaluated a chronologically older or younger candidate (as indicated by date of birth and age label) with either younger or older facial age appearance (as indicated by a photograph). In both studies, older-looking candidates received lower hireability ratings, due to less favorable fitness impressions. In addition, Study 1 showed that this age bias was reduced when the candidates provided counter-stereotypic information about their fitness. Study 2 showed that facial age-based discrimination is less prevalent in jobs with less customer contact (e.g., back office). PMID:29276492
Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).
Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M
2017-02-01
Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry.
Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung
2017-05-01
Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish pre-surgical goals of tooth movement since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation of two asymmetry patients focusing on more complicated yaw-dependent types than others: Y-type and A-type. This may suggest a clinical guideline on the targeted decompensation in patients with different types of facial asymmetries.
Testing the arousal hypothesis of neonatal imitation in infant rhesus macaques
Pedersen, Eric J.; Simpson, Elizabeth A.
2017-01-01
Neonatal imitation is the matching of (often facial) gestures by newborn infants. Some studies suggest that performance of facial gestures is due to general arousal, which may produce false positives on neonatal imitation assessments. Here we examine whether arousal is linked to facial gesturing in newborn infant rhesus macaques (Macaca mulatta). We tested 163 infants in a neonatal imitation paradigm in their first postnatal week and analyzed their lipsmacking gestures (a rapid opening and closing of the mouth), tongue protrusion gestures, and yawn responses (a measure of arousal). Arousal increased during dynamic stimulus presentation compared to the static baseline across all conditions, and arousal was higher in the facial gestures conditions than the nonsocial control condition. However, even after controlling for arousal, we found a condition-specific increase in facial gestures in infants who matched lipsmacking and tongue protrusion gestures. Thus, we found no support for the arousal hypothesis. Consistent with reports in human newborns, imitators’ propensity to match facial gestures is based on abilities that go beyond mere arousal. We discuss optimal testing conditions to minimize potentially confounding effects of arousal on measurements of neonatal imitation. PMID:28617816
Fang, Jing-Jing; Liu, Jia-Kuang; Wu, Tzu-Chieh; Lee, Jing-Wei; Kuo, Tai-Hong
2013-05-01
Computer-aided design has gained increasing popularity in clinical practice, and the advent of rapid prototyping technology has further enhanced the quality and predictability of surgical outcomes. It provides target guides for complex bony reconstruction during surgery. Therefore, surgeons can efficiently and precisely target fracture restorations. Based on three-dimensional models generated from a computed tomographic scan, precise preoperative planning simulation on a computer is possible. Combining the interdisciplinary knowledge of surgeons and engineers, this study proposes a novel surgical guidance method that incorporates a built-in occlusal wafer that serves as the positioning reference. Two patients with complex facial deformities and severe facial asymmetry were recruited. In vitro facial reconstruction was first rehearsed on physical models, where a customized surgical guide incorporating a built-in occlusal stent as the positioning reference was designed to implement the surgery plan. This study is intended to present the authors' preliminary experience in a complex facial reconstruction procedure. It suggests that in settings where intraoperative computed tomographic scans or navigation systems are not available, our approach could be an effective, expedient, and straightforward aid to enhance the surgical outcome of a complex facial repair.
Appearance-Based Inferences Bias Source Memory
Cassidy, Brittany S.; Zebrowitz, Leslie A.; Gutchess, Angela H.
2012-01-01
Previous research varying the trustworthiness of appearance has demonstrated that facial characteristics contribute to source memory. Two studies extended this work by investigating the contribution to source memory of babyfaceness, a facial quality known to elicit strong spontaneous trait inferences. Young adult participants viewed younger and older babyfaced and mature-faced individuals paired with sentences that were either congruent or incongruent with the target's facial characteristics. Identifying a source as dominant or submissive was least accurate when participants chose between a target whose behavior was incongruent with facial characteristics and a lure whose face mismatched the target in appearance, but matched the source memory question. In Study 1, this effect held true when identifying older sources, but not own-age, younger sources. When task difficulty was increased in Study 2, the relationship between face-behavior congruence and lure facial characteristics persisted, but it was not moderated by target age even though participants continued to correctly identify fewer older than younger sources. Taken together, these results indicate that trait expectations associated with variations in facial maturity can bias source memory for both own- and other-age faces, although own-age faces are less vulnerable to this bias, as shown in the moderation by task difficulty. PMID:22806429
The assessment of facial variation in 4747 British school children.
Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen
2012-12-01
The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
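For readers unfamiliar with this kind of landmark analysis, the sketch below outlines the general pipeline the abstract describes: superimpose each landmark configuration (Procrustes alignment) and then run principal component analysis on the aligned coordinates. The data are random placeholders, not the ALSPAC sample, and the simple one-pass alignment to a fixed reference is a simplification of full generalized Procrustes analysis.

```python
# Illustrative sketch of the analysis pipeline (hypothetical data, not ALSPAC):
# align each 21-landmark 3D configuration to a common reference with Procrustes
# superimposition, then run PCA on the flattened aligned coordinates.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_subjects, n_landmarks = 200, 21
faces = rng.normal(size=(n_subjects, n_landmarks, 3))   # placeholder landmark sets

reference = faces[0]                                     # crude reference choice
aligned = []
for f in faces:
    # procrustes() translates, scales, and rotates f to best match the reference
    _, f_aligned, _ = procrustes(reference, f)
    aligned.append(f_aligned.ravel())
aligned = np.array(aligned)

pca = PCA(n_components=14).fit(aligned)                  # 14 PCs, as in the study
print("variance explained:", pca.explained_variance_ratio_.sum())
scores = pca.transform(aligned)                          # per-subject PC scores
```

On real data, the leading components would then be inspected (e.g., by deforming the mean face along each PC) to give them anatomical interpretations such as facial height, width, or nasal prominence.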
South Palomares, Jennifer K; Sutherland, Clare A M; Young, Andrew W
2017-12-17
Given the frequency of relationships nowadays initiated online, where impressions from face photographs may influence relationship initiation, it is important to understand how facial first impressions might be used in such contexts. We therefore examined the applicability of a leading model of verbally expressed partner preferences to impressions derived from real face images and investigated how the factor structure of first impressions based on potential partner preference-related traits might relate to a more general model of facial first impressions. Participants rated 1,000 everyday face photographs on 12 traits selected to represent Fletcher et al.'s (1999, Journal of Personality and Social Psychology, 76, 72) verbal model of partner preferences. Facial trait judgements showed an underlying structure that largely paralleled the tripartite structure of Fletcher et al.'s verbal preference model, regardless of either face gender or participant gender. Furthermore, there was close correspondence between the verbal partner preference model and a more general tripartite model of facial first impressions derived from a different literature (Sutherland et al., 2013, Cognition, 127, 105), suggesting an underlying correspondence between verbal conceptual models of romantic preferences and more general models of facial first impressions. © 2017 The British Psychological Society.
Peripheral facial weakness (Bell's palsy).
Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida
2013-06-01
Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage, and severe consequences remain in 5% of patients.
Cysts of the oro-facial region: A Nigerian experience
Lawal, AO; Adisa, AO; Sigbeku, OF
2012-01-01
Aim: Though many studies have examined cysts of the jaws, most of them focused on a group of cysts and only a few have examined cysts based on a particular classification. The aim of this study is to review cysts of the oro-facial region seen at a tertiary health centre in Ibadan and to categorize these cases based on the Lucas, Killey and Kay, and WHO classifications. Materials and Methods: All histologically diagnosed oro-facial cysts were retrieved from the oral pathology archives. Information concerning cyst type, topography, age at time of diagnosis and gender of patients was gathered. Data obtained were analyzed with SPSS version 18.0.1 software. Results: A total of 92 histologically diagnosed oro-facial cysts were seen, from 60 (65.2%) males and 32 (34.8%) females. The age range was 4 to 73 years with a mean age of 27.99 ± 15.26 years. The peak incidence was in the third decade. The mandible/maxilla ratio was 1.5:1. Apical periodontal cysts were the most common type, accounting for 50% (n = 46) of the cysts observed. Using the WHO classification, cysts of the soft tissues of head, face and neck were overwhelmingly more common in males than females with a ratio of 14:3, while non-epithelial cysts occurred at a 3:1 male/female ratio. Conclusion: This study showed similar findings in regard to type, site and age incidence of oro-facial cysts compared to previous studies and also showed that the WHO classification protocol was the most comprehensive classification method for oro-facial cysts. PMID:22923885
3D landmarking in multiexpression face analysis: a preliminary study on eyebrows and mouth.
Vezzetti, Enrico; Marcolin, Federica
2014-08-01
The application of three-dimensional (3D) facial analysis and landmarking algorithms in the field of maxillofacial surgery and other medical applications, such as diagnosis of diseases by facial anomalies and dysmorphism, has gained a lot of attention. In a previous work, we used a geometric approach to automatically extract some 3D facial key points, called landmarks, working in the differential geometry domain, through the coefficients of fundamental forms, principal curvatures, mean and Gaussian curvatures, derivatives, shape and curvedness indexes, and tangent map. In this article we describe the extension of our previous landmarking algorithm, which is now able to extract eyebrows and mouth landmarks using both old and new meshes. The algorithm has been tested on our face database and on the public Bosphorus 3D database. We chose to work on the mouth and eyebrows as a separate study because of the role that these parts play in facial expressions. In fact, since the mouth is the part of the face that moves the most and affects mainly facial expressions, extracting mouth landmarks from various facial poses means that the newly developed algorithm is pose-independent.
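Two of the descriptors named in this abstract, the shape index and curvedness, have standard closed forms in terms of the per-vertex principal curvatures. The sketch below computes them under those standard (Koenderink-style) definitions; the curvature estimation from the mesh itself is assumed to be done elsewhere, and the example values are invented.

```python
# Small sketch of two descriptors mentioned in the abstract: shape index and
# curvedness, computed per vertex from principal curvatures k1 >= k2 under the
# usual Koenderink-style definitions. The curvature estimates themselves
# (derived from the mesh) are assumed to be given.
import numpy as np

def shape_index(k1, k2):
    """Shape index in [-1, 1]; flat points (k1 == k2 == 0) yield NaN."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (2.0 / np.pi) * np.arctan((k2 + k1) / (k2 - k1))

def curvedness(k1, k2):
    """Curvedness: overall magnitude of surface bending at the vertex."""
    return np.sqrt((k1**2 + k2**2) / 2.0)

# Hypothetical principal curvatures at three vertices.
k1 = np.array([0.9, 0.1, -0.2])
k2 = np.array([0.7, -0.1, -0.6])
print(shape_index(k1, k2), curvedness(k1, k2))
```

Thresholding such maps (for example, looking for saddle-like or cap-like regions) is one common way landmark candidates are localised in this kind of pipeline, although the authors' exact decision rules are not given here.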
Zhao, Ya-e; Ma, Jun-xian; Hu, Li; Wu, Li-ping; De Rojas, Manuel
2013-01-01
For a long time, classification of Demodex mites has been based mainly on their hosts and phenotypic characteristics. A new subspecies of Demodex folliculorum has been proposed, but not confirmed. Here, cox1 partial sequences of nine isolates of three Demodex species from two geographical sources (China and Spain) were studied to conduct molecular identification of D. folliculorum. Sequencing showed that the mitochondrial cox1 fragments of five D. folliculorum isolates from the facial skin of Chinese individuals were 429 bp long and that their sequence identity was 97.4%. The average sequence divergence was 1.24% among the five Chinese isolates, 0.94% between the two geographical isolate groups (China (5) and Spain (1)), and 2.15% between the two facial tissue sources (facial skin (6) and eyelids (1)). The genetic distance and rate of third-position nucleotide transition/transversion were 0.0125, 2.7 (3/1) among the five Chinese isolates, 0.0094, 3.1 (3/1) between the two geographical isolate groups, and 0.0217, 4.4 (3/1) between the two facial tissue sources. Phylogenetic trees showed that D. folliculorum from the two geographical isolate groups did not form sister clades, while those from different facial tissue sources did. According to the molecular characteristics, it appears that subspecies differentiation might not have occurred and that D. folliculorum isolates from the two geographical sources are of the same population. However, population differentiation might be occurring between isolates from facial skin and eyelids. PMID:24009203
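To make the divergence figures above concrete, the following minimal sketch computes an uncorrected pairwise divergence (p-distance) between aligned sequences. The sequences are short hypothetical stand-ins; the study's actual analyses (model-based genetic distances, transition/transversion ratios, phylogenetic trees) go well beyond this.

```python
# Minimal sketch: uncorrected pairwise divergence (p-distance) between aligned
# cox1 fragments. Sequences here are short hypothetical stand-ins.
from itertools import combinations

def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two equal-length aligned sequences."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

isolates = {
    "China_1": "ATGGCATTAC",
    "China_2": "ATGGCGTTAC",
    "Spain_1": "ATGACATTGC",
}
for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(f"{n1} vs {n2}: {100 * p_distance(s1, s2):.1f}% divergence")
```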
Facial Contouring by Targeted Restoration of Facial Fat Compartment Volume: The Midface.
Wang, Wenjin; Xie, Yun; Huang, Ru-Lin; Zhou, Jia; Herrler, Tanja; Zhao, Peijuan; Cheng, Chen; Zhou, Sizheng; Pu, Lee L Q; Li, Qingfeng
2017-03-01
Recent anatomical findings have suggested that facial fat distribution is complex and changes with age. Here, the authors developed a grafting technique based on the physiologic distribution and volume changes of facial fat compartments to achieve a youthful and natural-appearing face. Forty cadaveric hemifaces were used for the dissection of fat compartments and neurovascular structures in the midface area. Seventy-eight patients were treated for cheek atrophy using the authors' targeted restoration of midface fat compartment volume. The outcome was evaluated by a two-dimensional assessment, malar lipoatrophy assessment, and a satisfaction survey. The medial and lateral parts of the deep medial cheek fat compartment were separated by a septum arising from the lateral border of the levator anguli oris muscle. The angular vein traveled between the deep medial cheek fat compartment and the buccal fat pad, 12 mm from the maxilla. A total volume of 29.3 ml of fat was grafted per cheek for each patient. A 12-month follow-up revealed an average volume augmentation rate of 27.1 percent. Pleasing and elevated anterior projection of the cheek and ameliorated nasolabial groove were still obvious by 12 months after the procedure. In total, 95.2 percent of the patients were satisfied with their results. The present study provides the anatomical and clinical basis for the concept of compartmentally based fat grafting. It allows for the restoration of facial fat volume close to the physiologic state. With this procedure, a natural and youthful facial contour could be rebuilt with a high satisfaction rate. Therapeutic, IV.
Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica
2018-01-01
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differed in both the number and the intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.
Borsody, Mark K; Yamada, Chisa; Bielawski, Dawn; Heaton, Tamara; Castro Prado, Fernando; Garcia, Andrea; Azpiroz, Joaquín; Sacristan, Emilio
2014-04-01
Facial nerve stimulation has been proposed as a new treatment of ischemic stroke because autonomic components of the nerve dilate cerebral arteries and increase cerebral blood flow when activated. A noninvasive facial nerve stimulator device based on pulsed magnetic stimulation was tested in a dog middle cerebral artery occlusion model. We used an ischemic stroke dog model involving injection of autologous blood clot into the internal carotid artery that reliably embolizes to the middle cerebral artery. Thirty minutes after middle cerebral artery occlusion, the geniculate ganglion region of the facial nerve was stimulated for 5 minutes. Brain perfusion was measured using gadolinium-enhanced contrast MRI, and ATP and total phosphate levels were measured using 31P spectroscopy. Separately, a dog model of brain hemorrhage involving puncture of the intracranial internal carotid artery served as an initial examination of facial nerve stimulation safety. Facial nerve stimulation caused a significant improvement in perfusion in the hemisphere affected by ischemic stroke and a reduction in ischemic core volume in comparison to sham stimulation control. The ATP/total phosphate ratio showed a large decrease poststroke in the control group versus a normal level in the stimulation group. The same stimulation administered to dogs with brain hemorrhage did not cause hematoma enlargement. These results support the development and evaluation of a noninvasive facial nerve stimulator device as a treatment of ischemic stroke.
Donges, Uta-Susan; Kersting, Anette; Suslow, Thomas
2012-01-01
There is evidence that women are better in recognizing their own and others' emotions. The female advantage in emotion recognition becomes even more apparent under conditions of rapid stimulus presentation. Affective priming paradigms have been developed to examine empirically whether facial emotion stimuli presented outside of conscious awareness color our impressions. It was observed that masked emotional facial expression has an affect congruent influence on subsequent judgments of neutral stimuli. The aim of the present study was to examine the effect of gender on affective priming based on negative and positive facial expression. In our priming experiment sad, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by neutral faces which had to be evaluated. 81 young healthy volunteers (53 women) participated in the study. Subjects had no subjective awareness of emotional primes. Women did not differ from men with regard to age, education, intelligence, trait anxiety, or depressivity. In the whole sample, happy but not sad facial expression elicited valence congruent affective priming. Between-group analyses revealed that women manifested greater affective priming due to happy faces than men. Women seem to have a greater ability to perceive and respond to positive facial emotion at an automatic processing level compared to men. High perceptual sensitivity to minimal social-affective signals may contribute to women's advantage in understanding other persons' emotional states. PMID:22844519
Lee, I-Jui; Chen, Chien-Hsu; Lin, Ling-Yi
2016-01-01
Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, like facial expressions, that usually aid social interaction. In this study, we used software technology to create half-static and dynamic video materials to teach adolescents with ASD how to become aware of six basic facial expressions observed in real situations. This intervention system provides a half-way point by presenting a dynamic video of a specific facial element within an otherwise static frame, helping the six adolescents with ASD focus their attention on the relevant dynamic facial expressions and ignore irrelevant ones. Using a multiple baseline design across participants, we found that the intervention learning system provided a simple yet effective way for adolescents with ASD to focus their attention on the nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions. We conclude that the limited amount of information with structured and specific close-up visual social cues helped the participants improve judgments of the emotional meaning of the facial expressions of others.
Use of 3-dimensional surface acquisition to study facial morphology in 5 populations.
Kau, Chung How; Richmond, Stephen; Zhurov, Alexei; Ovsenik, Maja; Tawfik, Wael; Borbely, Peter; English, Jeryl D
2010-04-01
The aim of this study was to assess the use of 3-dimensional facial averages for determining morphologic differences from various population groups. We recruited 473 subjects from 5 populations. Three-dimensional images of the subjects were obtained in a reproducible and controlled environment with a commercially available stereo-photogrammetric camera capture system. Minolta VI-900 (Konica Minolta, Tokyo, Japan) and 3dMDface (3dMD LLC, Atlanta, Ga) systems were used. Each image was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was performed until average composite faces of 1 man and 1 woman were achieved for each subgroup. These average facial composites were superimposed based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed among the groups. The linear differences between surface shells ranged from 0.37 to 1.00 mm for the male groups. The linear differences ranged from 0.28 and 0.87 mm for the women. The color histograms showed that the similarities in facial shells between the subgroups by sex ranged from 26.70% to 70.39% for men and 36.09% to 79.83% for women. The average linear distance from the signed color histograms for the male subgroups ranged from -6.30 to 4.44 mm. The female subgroups ranged from -6.32 to 4.25 mm. Average faces can be efficiently and effectively created from a sample of 3-dimensional faces. Average faces can be used to compare differences in facial morphologies for various populations and sexes. Facial morphologic differences were greatest when totally different ethnic variations were compared. Facial morphologic similarities were present in comparable groups, but there were large variations in concentrated areas of the face. Copyright 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
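The averaging and comparison steps described above can be illustrated with a small sketch: once meshes share a common registration and vertex correspondence (assumed here, since the actual registration and superimposition methods are not reproduced), an average face is simply the per-vertex mean, and two averages can be compared vertex by vertex. The data, the 0.85-unit tolerance, and the group labels are all hypothetical.

```python
# Hedged sketch of the averaging/comparison idea: given registered meshes with
# vertex correspondence (assumed), compute per-group average faces and the
# per-vertex distances between them. Data and tolerance are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: two groups of 30 registered meshes, 5,000 corresponding vertices each.
group_a = rng.normal(size=(30, 5000, 3))
group_b = rng.normal(size=(30, 5000, 3)) + 0.3

average_a = group_a.mean(axis=0)                 # composite "average face" per group
average_b = group_b.mean(axis=0)

per_vertex_distance = np.linalg.norm(average_a - average_b, axis=1)
print("mean linear difference between surface shells: "
      f"{per_vertex_distance.mean():.2f} (arbitrary units)")
# A tolerance (e.g., 0.85 units here) can then express the percentage of the
# face judged "similar" between groups, analogous to the colour histograms above.
print("within tolerance:", 100 * (per_vertex_distance < 0.85).mean(), "%")
```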
Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus).
Maréchal, Laëtitia; Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura
2017-01-01
Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants' level of experience was defined as one of the following: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques' facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques' facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed faces, and a trend was found for aggressive faces. However, these participants, even when previously exposed to pictures, had difficulties in recognising aggressive, distressed and friendly faces above chance level. These results do not support the universality hypothesis, as exposed and naïve participants had difficulties in correctly identifying aggressive, distressed and friendly faces. Exposure to facial expressions improved their correct recognition. In addition, the findings suggest that providing simple exposure to 2D pictures (for example, information signs explaining animals' facial signalling in zoos or animal parks) is not a sufficient educational tool to reduce tourists' misinterpretations of macaque emotion. Additional measures, such as keeping a safe distance between tourists and wild animals, as well as reinforcing learning via videos or supervised visits led by expert guides, could reduce such issues and improve both animal welfare and tourist experience.
Evaluation of the smile: facial and dental considerations.
Panossian, Antoine J; Block, Michael S
2010-03-01
The purpose of this article is to establish an evidence-based evaluation of the esthetic region of the mouth, by reviewing normal values for the face, the smile line, and the teeth. A Medline search was performed to find evidence-based data on accepted normal ranges of facial and dental proportions. The information found was organized following a sequence of physical examinations, which then was used to develop a decision tree for diagnosis and treatment planning. By following this evaluation algorithm, clinicians will be able to document a standard set of data that will reveal skeletal and dental dysmorphia, which can then follow a well-organized sequence of treatment to re-establish facial and dental harmony. Copyright (c) 2010 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Diogo, R; Wood, B A; Aziz, M A; Burrows, A
2009-01-01
The mammalian facial muscles are a subgroup of hyoid muscles (i.e. muscles innervated by cranial nerve VII). They are usually attached to freely movable skin and are responsible for facial expressions. In this study we provide an account of the origin, homologies and evolution of the primate facial muscles, based on dissections of various primate and non-primate taxa and a review of the literature. We provide data not previously reported, including photographs showing in detail the facial muscles of primates such as gibbons and orangutans. We show that the facial muscles usually present in strepsirhines are basically the same muscles that are present in non-primate mammals such as tree-shrews. The exceptions are that strepsirhines often have a muscle that is usually not differentiated in tree-shrews, the depressor supercilii, and lack two muscles that are usually differentiated in these mammals, the zygomatico-orbicularis and sphincter colli superficialis. Monkeys such as macaques usually lack two muscles that are often present in strepsirhines, the sphincter colli profundus and mandibulo-auricularis, but have some muscles that are usually absent as distinct structures in non-anthropoid primates, e.g. the levator labii superioris alaeque nasi, levator labii superioris, nasalis, depressor septi nasi, depressor anguli oris and depressor labii inferioris. In turn, macaques typically lack a risorius, auricularis anterior and temporoparietalis, which are found in hominoids such as humans, but have muscles that are usually not differentiated in members of some hominoid taxa, e.g. the platysma cervicale (usually not differentiated in orangutans, panins and humans) and auricularis posterior (usually not differentiated in orangutans). Based on our observations, comparisons and review of the literature, we propose a unifying, coherent nomenclature for the facial muscles of the Mammalia as a whole and provide a list of more than 300 synonyms that have been used in the literature to designate the facial muscles of primates and other mammals. A main advantage of this nomenclature is that it combines, and thus creates a bridge between, those names used by human anatomists and the names often employed in the literature dealing with non-human primates and non-primate mammals. PMID:19531159
2018-01-01
Objectives Airway management in patients with panfacial trauma is complicated. In addition to involving facial lesions, such trauma compromises the airway, and the use of intermaxillary fixation makes it difficult to secure ventilation by usual approaches (nasotracheal or endotracheal intubation). Submental airway derivation is an alternative to tracheostomy and nasotracheal intubation, allowing a patent airway with minimal complications in complex patients. Materials and Methods This is a descriptive, retrospective study based on a review of medical records of all patients with facial trauma from January 2003 to May 2015. In total, 31 patients with complex fractures requiring submental airway derivation were included. No complications, such as bleeding, infection, or vascular, glandular, or nerve lesions, occurred in any of the patients. Results The use of submental airway derivation is a simple, safe, and easy method to ensure airway management. Moreover, it allows easier reconstruction. Conclusion Based on these results, we concluded that, if the relevant steps are followed, the use of submental intubation in the treatment of patients with complex facial trauma is a safe and effective option. PMID:29535964
[Establishment of the database of the 3D facial models for the plastic surgery based on network].
Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun
2008-07-01
To collect three-dimensional (3D) facial data from 30 patients with facial deformities using a 3D scanner and to establish a professional Internet-based database that can support clinical intervention. The primitive point data of the facial topography were collected with the 3D scanner, and the 3D point cloud was then edited with reverse-engineering software to reconstruct a 3D model of the face. The database system was divided into three parts: basic information, disease information and surgery information. The web system was programmed in Java. The links between the tables of the database are reliable, and query operations and data mining are convenient. Users can access the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and web system adapted to plastic surgery of the human face, which can be used both in the clinic and in basic research.
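As a rough illustration of the three-part structure described above (basic information, disease information, surgery information), the following sketch shows one possible relational schema in Python/SQLite. Every table and column name here is an assumption made for illustration; the original system was a Java web application storing scanner-derived 3D models.

```python
import sqlite3

# A minimal sketch of a three-part schema (basic information, disease
# information, surgery information). All table and column names are
# assumptions for illustration only.
schema = """
CREATE TABLE IF NOT EXISTS patient_basic (
    patient_id   INTEGER PRIMARY KEY,
    name         TEXT,
    sex          TEXT,
    birth_date   TEXT
);
CREATE TABLE IF NOT EXISTS disease_info (
    patient_id   INTEGER REFERENCES patient_basic(patient_id),
    diagnosis    TEXT,
    model_path   TEXT   -- path to the reconstructed 3D facial model file
);
CREATE TABLE IF NOT EXISTS surgery_info (
    patient_id   INTEGER REFERENCES patient_basic(patient_id),
    procedure    TEXT,
    surgery_date TEXT,
    postop_model TEXT   -- path to a post-operative 3D scan, if available
);
"""

conn = sqlite3.connect("facial_models.db")
conn.executescript(schema)
conn.commit()
```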
Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.
Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C
2011-03-01
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
Sekelj, Alen; Đanić, Davorin
2017-09-01
Lyme borreliosis is a vector-borne infectious disease characterized by three disease stages. In the areas endemic for borreliosis, every acute facial palsy indicates serologic testing and implies a specific approach to the disease. The aim of the study was to identify and confirm the value of acoustic reflex and House-Brackman (HB) grading scale as prognostic indicators of facial palsy in neuroborreliosis. The study included 176 patients with acute facial palsy divided into three groups based on serologic testing: borreliosis, Bell's palsy, and facial palsy caused by herpes simplex virus type 1 (HSV-1). Study patients underwent baseline audiometry with tympanometry and acoustic reflex, whereas current state of facial palsy was assessed by the HB scale. Subsequently, the same tests were obtained on three occasions, i.e. in week 3, 6 and 12 of presentation. The patients diagnosed with borreliosis, Bell's palsy and HSV-1 differed according to the time to acoustic reflex recovery, which took the longest in patients with borreliosis. These patients had the highest percentage of suprastapedial lesions at all time points and recovery was achieved later as compared with the other two diagnoses. The mean score on the HB scale declined with time, also at a slower rate in borreliosis patients. The prognosis of acoustic reflex and facial palsy recovery according to the HB scale was not associated with the length of elapsed time. The results obtained in the present study strongly confirmed the role of acoustic reflex and HB grading scale as prognostic indicators of facial palsy in neuroborreliosis.
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
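The abstract describes driving the head's 12 degrees of freedom directly from FACS action units. The sketch below illustrates, in Python, one simple way such a mapping could look. The joint names, the AU-to-joint gains, and the linear combination scheme are all assumptions made for illustration; this is not the Muecas/RoboComp interface.

```python
import numpy as np

# Hypothetical joint set for a 12-DOF head (names invented for illustration).
JOINTS = ["brow_l", "brow_r", "lid_l", "lid_r", "eye_pan", "eye_tilt",
          "mouth_open", "lip_corner_l", "lip_corner_r", "neck_pan",
          "neck_tilt", "neck_roll"]

# Assumed linear mapping from a few FACS action units (intensity in [0, 1])
# to joint offsets in radians; a real system would calibrate this per actuator.
AU_TO_JOINT = {
    1:  {"brow_l": 0.30, "brow_r": 0.30},               # AU1: inner brow raiser
    4:  {"brow_l": -0.25, "brow_r": -0.25},             # AU4: brow lowerer
    12: {"lip_corner_l": 0.20, "lip_corner_r": 0.20},   # AU12: lip corner puller
    26: {"mouth_open": 0.40},                           # AU26: jaw drop
}

def aus_to_joint_targets(au_intensities):
    """Convert {AU number: intensity} into joint target offsets (radians)."""
    targets = dict.fromkeys(JOINTS, 0.0)
    for au, intensity in au_intensities.items():
        for joint, gain in AU_TO_JOINT.get(au, {}).items():
            targets[joint] += gain * float(np.clip(intensity, 0.0, 1.0))
    return targets

# Example: a smile with slightly raised brows.
print(aus_to_joint_targets({12: 0.8, 1: 0.3}))
```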
[Analysis of volcanic-ash-based insoluble ingredients of facial cleansers].
Ikarashi, Yoshiaki; Uchino, Tadashi; Nishimura, Tetsuji
2011-01-01
The substance termed "Shirasu balloons", produced by the heat treatment of volcanic silicates, is in the form of hollow glass microspheres. Recently, this substance has gained popularity as an ingredient of facial cleansers currently available in the market, because it lends a refreshing and smooth feeling after use. However, reports of eye injury after use of a facial cleanser containing a substance made from volcanic ashes are on the rise. We presumed that the shape and size of these volcanic-ash-based ingredients would be the cause of such injuries. Therefore, in this study, we first developed a method for extracting water-insoluble ingredients such as "Shirasu balloons" from the facial cleansers, and then, we examined their shapes and sizes. The insoluble ingredients extracted from the cleansers were mainly those derived from volcanic silicates. Some of the ingredients remained in the form of glass microspheres, but for the most part, the ingredients were present in various forms, such as fragments of broken glass. Some of the fragments were larger than 75 microm in length. Foreign objects having a certain hardness, shape, and size (e.g., size greater than 75 microm) can possibly cause eye injury. We further examined insoluble ingredients of facial scrubs, such as artificial mineral complexes, mud, charcoal, and polymers, other than volcanic-silicate-based ingredients. The amounts of insoluble ingredients extracted from these scrubs were small, and the particles did not have sharp edges. Some scrubs had ingredients with particles larger than 75 microm in size, but their specific gravities were small and their hardness values were much lower than those of glass microspheres of ingredients such as "Shirasu balloons". Because the fragments of glass microspheres can possibly cause eye injury, facial cleansers containing large insoluble ingredients derived from volcanic ashes should not be used around the eyes.
Van Houtte, Evelyne; Casselman, Jan; Janssens, Sandra; De Kegel, Alexandra; Maes, Leen; Dhooge, Ingeborg
2014-11-01
Valproic acid (VPA) is a known teratogenic drug. Exposure to VPA during the pregnancy can lead to a distinct facial appearance, a cluster of major and minor anomalies and developmental delay. In this case report, two siblings with fetal valproate syndrome and a mild conductive hearing loss were investigated. Radiologic evaluation showed middle and inner ear malformations in both children. Audiologic, vestibular and motor examination was performed. This is the first case report to describe middle and inner ear malformations in children exposed to VPA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The shared neural basis of empathy and facial imitation accuracy.
Braadbaart, L; de Grauw, H; Perrett, D I; Waiter, G D; Williams, J H G
2014-01-01
Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression. © 2013.
Zheng, Lijun; Zheng, Yong
2015-07-01
Previous studies have documented the correlation between preferences for male facial masculinity and perceived masculinity: women who rate their male partner as more masculine tend to prefer more masculine faces. Men's self-rated masculinity predicts their female partner's preference for masculinity. This study examined the association between other trait preferences and preference for male facial masculinity among 556 gay and bisexual men across multiple cities in China. Participants were asked to choose the three most important traits in a romantic partner from a list of 23 traits. Each participant was then asked to choose a preferred face in each of 10 pairs of male faces presented sequentially, with each pair consisting of a masculinized and feminized version of the same base face. The results indicated that preferences for health and status-related traits were correlated with preferences for male facial masculinity in gay and bisexual men in China; individuals who were more health- or status-oriented in their preferences for a romantic partner preferred more masculine male faces than individuals with lower levels of these orientations. The findings have implications for the correlated preferences for facial masculinity and health- and status-related traits and may be related to perceived health and dominance/aggression of masculine faces based on a sample of non-Western gay and bisexual men.
New insights into the phenotypic covariance structure of the anthropoid cranium
Makedonska, Jana
2014-01-01
In complex organisms, suites of non-random, highly intercorrelated phenotypic traits, organized according to their developmental history and forming semi-autonomous units (i.e. modules), have the potential to impose constraints on morphological diversification or to improve evolvability. Because of its structural, developmental and functional complexity, the cranium is arguably one of the best models for studying the interplay between developmental history and the need for various parts of a structure to specialize in different functions. This study evaluated the significance of two specific types of developmental imprints in the adult anthropoid cranium, those imposed by ossification pattern (i.e. ossification with and without a pre-existing cartilaginous phase) and those imposed by tissue origin (i.e. tissues derived principally from neural-crest vs. those derived from paraxial mesoderm). Specifically, this study tests the hypothesis that the face and the basicranium form two distinct modules with higher within-unit trait integration magnitudes compared with the cranium as a whole. Data on 12 anthropoid primate species were collected in the form of three-dimensional coordinates of 23 landmarks digitized on cranial surface models that sample the basicranium as well as regions of functional importance during feeding. The presence of a significant modularity imprint in the adult cranium was assessed using a between-region within-species comparison of multivariate correlations (RV coefficients) obtained with partial least-squares, using within-module within-species eigenvalue variance (EV), and using cluster analyses and non-metric multidimensional scaling. In addition to addressing the validity of the cranial modularity hypothesis in anthropoids, this study addressed methodological aspects of the interspecific comparison of morphological integration, namely the effect of sample size and the effect of landmark number on integration magnitudes. Two methodological findings that are of significance to research in morphological integration are that: (i) a smaller sample size increases integration magnitude, but preserves the pattern of variation of integration magnitudes from block to block within species; and that (ii) the number of landmarks per cranial block does not significantly impact block integration magnitude measured as EV. Results from the analyses testing for cranial modularity imprints in the adult anthropoid cranium show that some facial landmarks covary more strongly with basicranial landmarks than with other facial landmarks. Cluster methods, non-metric multidimensional scaling and, to an extent, RV results show that the rostral and the zygomatic landmarks covary more strongly with the basicranial landmarks than they do with the molar landmarks. However, the rostral–zygomatic–basicranial block, the molar block, the facial block, the basicranial block and the other analyzed cranial and facial blocks are not more integrated than the cranium. Thus, the morphological variation in the adult anthropoid cranium is not significantly constrained by at least two of the potential developmental sources of its covariance structure. PMID:25406861
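The between-block comparison described above rests on the RV coefficient, a multivariate analogue of a squared correlation between two blocks of variables. The following Python sketch computes Escoufier's RV coefficient for two landmark blocks; the simulated specimen data are purely illustrative.

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between two blocks of variables.

    X, Y: arrays of shape (n_specimens, n_variables), e.g. flattened 3D
    landmark coordinates for a facial block and a basicranial block.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Sxx = Xc.T @ Xc
    Syy = Yc.T @ Yc
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

# Toy example with simulated specimens (values are illustrative only).
rng = np.random.default_rng(0)
face_block = rng.normal(size=(30, 9))          # e.g. 3 landmarks x 3 coordinates
basicranium_block = rng.normal(size=(30, 12))  # e.g. 4 landmarks x 3 coordinates
print(rv_coefficient(face_block, basicranium_block))
```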
A systematic review of filler agents for aesthetic treatment of HIV facial lipoatrophy (FLA).
Jagdeo, Jared; Ho, Derek; Lo, Alex; Carruthers, Alastair
2015-12-01
HIV facial lipoatrophy (FLA) is characterized by facial volume loss. HIV FLA affects the facial contours of the cheeks, temples, and orbits, and is associated with social stigma. Although new highly active antiretroviral therapy medications are associated with less severe FLA, the prevalence of HIV FLA among treated individuals exceeds 50%. The goal of our systematic review is to examine published clinical studies involving the use of filler agents for aesthetic treatment of HIV FLA and to provide evidence-based recommendations based on published efficacy and safety data. A systematic review of the published literature was performed on July 1, 2015, on filler agents for aesthetic treatment of HIV FLA. Based on published studies, poly-L-lactic acid is the only filler agent with grade of recommendation: B. Other reviewed filler agents received grade of recommendation: C or D. Poly-L-lactic acid may be best for treatment over temples and cheeks, whereas calcium hydroxylapatite, with a Food and Drug Administration indication of subdermal implantation, may be best used deeply over bone for focal enhancement. Additional long-term randomized controlled trials are necessary to elucidate the advantages and disadvantages of fillers that have different biophysical properties, in conjunction with cost-effectiveness analysis, for treatment of HIV FLA. Published by Elsevier Inc.
Valentova, Jaroslava Varella; Havlíček, Jan
2013-01-01
Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions. PMID:24358180
Ha, Grace K; Parikh, Shivani; Huang, Zhi; Petitto, John M
2008-08-13
The temporal relationship between severity of peripheral axonal injury and T lymphocyte trafficking to the neuronal cell bodies of origin in the brain has been unclear. We sought to test the hypothesis that greater neuronal death induced by disparate forms of peripheral nerve injury would result in differential patterns of T cell infiltration and duration at the cell bodies of origin in the brain and that these measures would correlate with the magnitude of neuronal death over time and cumulative neuronal loss. To test this hypothesis, we compared the time course of CD3(+) T cell infiltration and neuronal death (assessed by CD11b(+) perineuronal microglial phagocytic clusters) following axonal crush versus axonal resection injuries, two extreme variations of facial nerve axotomy that result in mild versus severe neuronal loss, respectively, in the facial motor nucleus. We also quantified the number of facial motor neurons present at 49 days post-injury to determine whether differences in the levels of neuronal death between nerve crush and resection correlated with differences in cumulative neuronal loss. Between 1 and 7 days post-injury when levels of neuronal death were minimal, we found that the rate of accumulation and magnitude of the T cell response was similar following nerve crush and resection. Differences in the T cell response were apparent by 14 days post-injury when the level of neuronal death following resection was substantially greater than that seen in crush injury. For nerve resection, the peak of neuronal death at 14 days post-resection was followed by a maximal T cell response one week later at 21 days. Differences in the level of neuronal death between the two injuries across the time course tested reflected differences in cumulative neuronal loss at 49 days post-injury. Altogether, these data suggest that the trafficking of T cells to the injured FMN is dependent upon the severity of peripheral nerve injury and associated neuronal death.
Cheung, Melody J.; Taher, Muba; Lauzon, Gilles J.
2005-01-01
OBJECTIVE To summarize clinical recognition and current management strategies for four types of acneiform facial eruptions common in young women: acne vulgaris, rosacea, folliculitis, and perioral dermatitis. QUALITY OF EVIDENCE Many randomized controlled trials (level I evidence) have studied treatments for acne vulgaris over the years. Treatment recommendations for rosacea, folliculitis, and perioral dermatitis are based predominantly on comparison and open-label studies (level II evidence) as well as expert opinion and consensus statements (level III evidence). MAIN MESSAGE Young women with acneiform facial eruptions often present in primary care. Differentiating between morphologically similar conditions is often difficult. Accurate diagnosis is important because treatment approaches are different for each disease. CONCLUSION Careful visual assessment with an appreciation for subtle morphologic differences and associated clinical factors will help with diagnosis of these common acneiform facial eruptions and lead to appropriate management. PMID:15856972
Santosa, Katherine B.; Fattah, Adel; Gavilán, Javier; Hadlock, Tessa A.; Snyder-Warwick, Alison K.
2017-01-01
IMPORTANCE There is no widely accepted assessment tool or common language used by clinicians caring for patients with facial palsy, making exchange of information challenging. Standardized photography may represent such a language and is imperative for precise exchange of information and comparison of outcomes in this special patient population. OBJECTIVES To review the literature to evaluate the use of facial photography in the management of patients with facial palsy and to examine the use of photography in documenting facial nerve function among members of the Sir Charles Bell Society—a group of medical professionals dedicated to care of patients with facial palsy. DESIGN, SETTING, AND PARTICIPANTS A literature search was performed to review photographic standards in patients with facial palsy. In addition, a cross-sectional survey of members of the Sir Charles Bell Society was conducted to examine use of medical photography in documenting facial nerve function. The literature search and analysis was performed in August and September 2015, and the survey was conducted in August and September 2013. MAIN OUTCOMES AND MEASURES The literature review searched EMBASE, CINAHL, and MEDLINE databases from inception of each database through September 2015. Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. Articles that discussed patients with facial palsy and outlined photographic guidelines for this patient population were included in the study. The survey was disseminated to the Sir Charles Bell Society members in electronic form. It consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. RESULTS In total, 393 articles were identified in the literature search, 7 of which fit the inclusion criteria. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on photographic standards for the population with facial palsy. Eighty-three of 151 members (55%) of the Sir Charles Bell Society responded to the survey. All survey respondents used photographic documentation, but there was variability in which facial expressions were used. Eighty-two percent (68 of 83) used some form of videography. From these data, we propose a set of minimum photographic standards for patients with facial palsy, including the following 10 static views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view. CONCLUSIONS AND RELEVANCE There is no consensus on photographic standardization to report outcomes for patients with facial palsy. Minimum photographic standards for facial paralysis publications are proposed. Videography of the dynamic movements of these views should also be recorded. LEVEL OF EVIDENCE NA. PMID:28125753
Metzger, Marc C; Vogel, Mathias; Hohlweg-Majert, Bettina; Mast, Hansjörg; Fan, Xianqun; Rüdell, Alexandra; Schlager, Stefan
2011-09-01
The purpose of this study was to evaluate and analyze the statistical shape of the outer mandible contour of Caucasian and Chinese people, offering data for the production of preformed mandible reconstruction plates. A CT database of 925 Caucasians (male: n=463, female: n=462) and 960 Chinese (male: n=469, female: n=491), including scans of unaffected mandibles, was used and imported into the 3D modeling software Voxim (IVS-Solutions, Chemnitz, Germany). Anatomical landmarks (n=22 points for both sides) were set using the 3D view along the outer contour of the mandible in the area where reconstruction plates are commonly located. We used morphometric methods for statistical shape analysis. We found statistically significant differences between the populations, including a distinct discrimination given by the mandibular landmarks. After generating a metric model, this shape information, which separated the populations, appeared to be of no clinical relevance. The metric size information given by ramus length, however, provided a sound basis for the production of standard reconstruction plates. Clustering by ramus length into three sizes and calculating the means of these size clusters seems to be a good solution for constructing preformed reconstruction plates that will fit the vast majority of patients. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
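To illustrate the size-clustering step described above, the sketch below groups hypothetical ramus-length measurements into three clusters and reports the cluster means, from which plate sizes could be derived. The measurement values and the use of k-means are assumptions for illustration; the abstract does not specify the study's exact clustering procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical ramus-length measurements in millimetres (values invented for
# illustration; the study pooled 1,885 CT-derived mandibles).
ramus_length_mm = np.array([52.1, 55.4, 58.9, 61.2, 63.7, 66.0, 68.3,
                            70.1, 57.5, 60.8, 64.2, 67.9, 54.3, 62.5]).reshape(-1, 1)

# Cluster into three size groups, as proposed for preformed plate sizing.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ramus_length_mm)

for label in range(3):
    members = ramus_length_mm[km.labels_ == label]
    print(f"size cluster {label}: mean ramus length = {members.mean():.1f} mm "
          f"(n = {len(members)})")
```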
ERIC Educational Resources Information Center
Sui, Jie; Chechlacz, Magdalena; Humphreys, Glyn W.
2012-01-01
Facial self-awareness is a basic human ability dependent on a distributed bilateral neural network and revealed through prioritized processing of our own over other faces. Using non-prosopagnosic patients we show, for the first time, that facial self-awareness can be fractionated into different component processes. Patients performed two face…
Differences between Children and Adults in the Recognition of Enjoyment Smiles
ERIC Educational Resources Information Center
Del Giudice, Marco; Colle, Livia
2007-01-01
The authors investigated the differences between 8-year-olds (n = 80) and adults (n = 80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple…
ERIC Educational Resources Information Center
Wang, Shirley S.; Treat, Teresa A.; Brownell, Kelly D.
2008-01-01
This study examines 2 aspects of cognitive processing in person perception--attention and decision making--in classroom-relevant contexts. Teachers completed 2 implicit, performance-based tasks that characterized attention to and utilization of 4 student characteristics of interest: ethnicity, facial affect, body size, and attractiveness. Stimuli…
Facial image of Biblical Jews from Israel.
Kobyliansky, E; Balueva, T; Veselovskaya, E; Arensburg, B
2008-06-01
The present report deals with reconstructing the facial shapes of ancient inhabitants of Israel based on their cranial remains. The skulls of a male from the Hellenistic period and a female from the Roman period have been reconstructed. They were restored using the most recently developed programs in anthropological facial reconstruction, especially that of the Institute of Ethnology and Anthropology of the Russian Academy of Sciences (Balueva & Veselovskaya 2004). The basic craniometrical measurements of the two skulls were taken according to Martin & Saller (1957) and compared to the data from three ancient populations of Israel described by Arensburg et al. (1980): that of the Hellenistic period dating from 332 to 37 B.C., that of the Roman period, from 37 B.C. to 324 C.E., and that of the Byzantine period that continued until the Arab conquest in 640 C.E. Most of this osteological material was excavated in the Jordan River and the Dead Sea areas. A sample of XVIIth-century Jews from Prague (Matiegka 1926) was also used for osteometrical comparisons. The present study will characterize not only the osteological morphology of the material, but also the facial appearance of ancient inhabitants of Israel. From an anthropometric point of view, the two skulls studied here definitely belong to the same sample from the Hellenistic, Roman, and Byzantine populations of Israel as well as from Jews from Prague. Based on its facial reconstruction, the male skull may belong to the large Mediterranean group that inhabited this area from historic to modern times. The female skull also exhibits all the Mediterranean features but, in addition, probably some equatorial (African) mixture manifested by the shape of the reconstructed nose and the facial prognathism.
Longitudinal Analysis of Superficial Midfacial Fat Volumes Over a 10-Year Period.
Tower, Jacob; Seifert, Kimberly; Paskhover, Boris
2018-04-11
Volumetric changes to facial fat that occur with aging remain poorly understood. The aim of this study was to evaluate for longitudinal changes to midfacial fat volumes in a group of individuals. We conducted a retrospective longitudinal study of adult subjects who underwent multiple facial computed tomographic (CT) scans timed at least 8 years apart. Subjects who underwent facial surgery or suffered facial trauma were excluded. Facial CT scans were analyzed, and superficial cheek fat volumes were measured and compared to track changes that occurred with aging. Fourteen subjects were included in our analysis of facial aging (5 male, 9 female; mean initial age 50.9 years; mean final age 60.4 years). In the right superficial cheek there was an increase in mean (SD) superficial fat volume from 10.33 (2.01) to 10.50 (1.80) cc, which was not statistically significant (P = 0.75). Similar results were observed in the left cheek. There were no statistically significant longitudinal changes to caudal, middle, or cephalad subdivisions of bilateral superficial cheek fat. A simple linear regression was performed to predict superficial cheek fat pad volume based on age, which did not reach statistical significance (P = 0.31), with an R² of 0.039. This study is the first to quantitatively assess for longitudinal changes to midfacial fat in a group of individuals. Superficial cheek fat remained stable as subjects aged from approximately 50 to 60 years old, with no change in total volume or redistribution within a radiographically defined compartment. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
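The regression reported above (fat volume on age, R² = 0.039) can be reproduced in form with a single call, as sketched below; the ages and volumes are invented placeholders, not the study's measurements.

```python
from scipy import stats

# Hypothetical (age, superficial cheek fat volume in cc) pairs for illustration;
# the study's regression of volume on age was non-significant (R^2 = 0.039).
ages = [48, 50, 52, 55, 57, 59, 60, 62, 63, 65]
volumes_cc = [10.1, 10.6, 9.8, 10.4, 10.9, 10.2, 10.5, 10.3, 10.7, 10.4]

result = stats.linregress(ages, volumes_cc)
print(f"slope = {result.slope:.3f} cc/year, R^2 = {result.rvalue**2:.3f}, "
      f"p = {result.pvalue:.2f}")
```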
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, working on images from video sequences, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction on the eye and nose images separately, and then a multi-layer perceptron classifier was used. Compared with the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part and 97.25% for the whole face).
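The sketch below illustrates the 2DPCA half of the hybrid feature extraction described above, followed by a multi-layer perceptron classifier. The 2DLDA stage is omitted and the toy patches and labels are invented, so this is only an outline of the pipeline under stated assumptions, not the authors' ACPDL2D implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def twod_pca_projection(images, n_components=5):
    """Two-dimensional PCA (2DPCA): project each image matrix onto the top
    eigenvectors of the image covariance matrix, without vectorising first."""
    images = np.asarray(images, dtype=float)          # shape (N, h, w)
    centered = images - images.mean(axis=0)
    # Image covariance matrix G (w x w), averaged over the training images.
    G = np.einsum("nij,nik->jk", centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, ::-1][:, :n_components]         # top eigenvectors
    return images @ proj, proj                        # features: (N, h, n_components)

# Toy data standing in for cropped eye or nose patches (labels are illustrative).
rng = np.random.default_rng(1)
patches = rng.random((40, 16, 16))
labels = np.repeat(np.arange(8), 5)                   # 8 hypothetical identities

features, proj = twod_pca_projection(patches, n_components=4)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(features.reshape(len(patches), -1), labels)
print(clf.score(features.reshape(len(patches), -1), labels))
```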
Vibration anesthesia for the reduction of pain with facial dermal filler injections.
Mally, Pooja; Czyz, Craig N; Chan, Norman J; Wulc, Allan E
2014-04-01
Vibration anesthesia is an effective pain-reduction technique for facial cosmetic injections. The analgesic effect of this method was tested in this study during facial dermal filler injections. The study aimed to evaluate the safety and efficacy of vibration anesthesia for these facial injections. This prospective study analyzed 41 patients who received dermal filler injections to the nasolabial folds, tear troughs, cheeks, and other facial sites. The injections were administered in a randomly assigned split-face design. One side of the patient's face received vibration together with dermal filler injections, whereas the other side received dermal filler injections alone. The patients completed a posttreatment questionnaire pertaining to injection pain, adverse effects, and preference for vibration with future dermal filler injections. The patients experienced both clinically and statistically significant pain reduction when a vibration stimulus was co-administered with the dermal filler injections. No adverse events were reported. The majority of the patients (95%) reported a preference for vibration anesthesia with subsequent dermal filler injections. Vibration is a safe and effective method of achieving anesthesia during facial dermal filler injections. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
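The proposed model is a joint generative-discriminative latent variable model with Monte Carlo integration, which is beyond a short sketch. As a much simpler stand-in, the following Python code illustrates only the general idea of fusing two facial feature sets through a low-dimensional shared subspace (here via CCA) and then detecting several action units with per-AU classifiers; the simulated data and the choice of CCA plus logistic regression are assumptions made for illustration and do not reproduce the authors' method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Toy stand-ins for two facial feature sets (e.g. geometric and appearance
# descriptors) and binary activations of five action units; all values simulated.
rng = np.random.default_rng(2)
n = 200
geometric = rng.normal(size=(n, 20))
appearance = rng.normal(size=(n, 40))
aus = (rng.random((n, 5)) < 0.3).astype(int)          # 5 hypothetical AUs

# Learn a low-dimensional shared subspace from the two views, then concatenate
# the projected views as the fused representation.
cca = CCA(n_components=5).fit(geometric, appearance)
g_proj, a_proj = cca.transform(geometric, appearance)
fused = np.hstack([g_proj, a_proj])

# One binary classifier per action unit on the fused features.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(fused, aus)
print(clf.predict(fused[:3]))
```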
A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour
Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Mln Kyeong; Xu, Yue
2016-01-01
The influence of the three-dimensional facial contour, evaluated dynamically, on smile esthetics is essential knowledge for improving facial beauty. However, the kinematic features of the facial smile contour and the contributions of the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates of the screened points and their hard tissue counterparts. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment of the other parts of the smile contour contributes partially to their dynamic esthetics. Moreover, unlike the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction and the former is better improved by cosmetic procedures to enhance the beauty of the smile. PMID:26911450
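A minimal sketch of the screening idea described above: fit a curve to the smile-contour points in each frame of a motion-capture sequence and score each point's mobility across frames. The 2D toy tracks, the quadratic curve, and the mobility threshold are assumptions for illustration; the study used 3D data and its own fitting procedure.

```python
import numpy as np

# Simulated 2D tracks of points along a "smile contour" over video frames
# (coordinates invented for illustration).
rng = np.random.default_rng(3)
n_frames, n_points = 50, 12
rest = np.stack([np.linspace(0, 10, n_points), np.zeros(n_points)], axis=1)
motion = np.zeros((n_frames, n_points, 2))
motion[:, -3:, 1] = np.linspace(0, 1, n_frames)[:, None]   # only the mouth-corner end moves
tracks = rest + motion + rng.normal(scale=0.02, size=(n_frames, n_points, 2))

# Fit a quadratic to the contour in each frame (a simple stand-in for the study's
# curve fitting) and track how the leading coefficient (curvature) changes.
curvature = np.array([np.polyfit(frame[:, 0], frame[:, 1], deg=2)[0] for frame in tracks])
print(f"curvature range across frames: {curvature.min():.4f} to {curvature.max():.4f}")

# Score each point's mobility as the total standard deviation of its coordinates
# across frames, and split points into relatively stable and unstable sets.
mobility = tracks.std(axis=0).sum(axis=1)
stable = np.where(mobility < 0.1)[0]
unstable = np.where(mobility >= 0.1)[0]
print("stable point indices:", stable)
print("unstable point indices:", unstable)
```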
Facial Performance Transfer via Deformable Models and Parametric Correspondence.
Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland
2012-09-01
The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
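One concrete way to realise a sparse linear mapping between two AAM parameter spaces is a multi-task Lasso, sketched below; the simulated parameter vectors and this specific regulariser are assumptions made for illustration rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Simulated AAM parameter vectors for a source and a target speaker (values are
# illustrative; real parameters would come from two independently trained AAMs).
rng = np.random.default_rng(4)
n_frames = 300
source_params = rng.normal(size=(n_frames, 15))
true_map = rng.normal(size=(15, 12)) * (rng.random((15, 12)) < 0.2)  # sparse ground truth
target_params = source_params @ true_map + 0.05 * rng.normal(size=(n_frames, 12))

# Learn a sparse linear mapping from source to target parameters; at run time,
# each tracked source frame would be mapped and the target AAM re-synthesised.
mapper = MultiTaskLasso(alpha=0.01, max_iter=5000).fit(source_params, target_params)
new_source_frame = rng.normal(size=(1, 15))
print(mapper.predict(new_source_frame))
```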
Analysis of the efficacy of marketing tools in facial plastic surgery.
Zavod, Matthew B; Adamson, Peter A
2008-06-01
To compare referral sources to a facial plastic surgery practice and to develop models correlating the referral source with the decision for surgery. Retrospective descriptive study. Well-established, metropolitan, private facial plastic surgery practice with training fellowship affiliated with an academic centre. One-thousand eighty-nine new consecutive patients presenting between January 2001 and December 2005 recorded intake data including age, gender, and chief complaint. Final data input was their decision for or against surgery. Main outcome measures included differences in referral sources based on data collected and how those sources related to decision for surgery. A 50% conversion rate was found. Women and older patients were more likely to be referred from magazines, television, and newspapers and for facial rejuvenation. Men and younger patients were more likely to be referred from the website and for rhinoplasty. For facial rejuvenation, both the number of patients interested in and the probability that they agreed to the procedure increased with age. For rhinoplasty, the converse was true. The most likely patients to schedule surgery were those who were referred from other patients, friends, or family members in our practice. The data confirm that word-of-mouth referrals are the most important source for predicting which patients will elect to proceed with surgery in this established facial cosmetic surgery practice.
Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji
2011-11-01
Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.
Chen, Zhongting; Poon, Kai-Tak; Cheng, Cecilia
2017-08-01
Studies have examined social maladjustment among individuals with Internet addiction, but little is known about their deficits in specific social skills and the underlying psychological mechanisms. The present study filled these gaps by (a) establishing a relationship between deficits in facial expression recognition and Internet addiction, and (b) examining the mediating role of perceived stress that explains this hypothesized relationship. Ninety-seven participants completed validated questionnaires that assessed their levels of Internet addiction and perceived stress, and performed a computer-based task that measured their facial expression recognition. The results revealed a positive relationship between deficits in recognizing disgust facial expression and Internet addiction, and this relationship was mediated by perceived stress. However, the same findings did not apply to other facial expressions. Ad hoc analyses showed that recognizing disgust was more difficult than recognizing other facial expressions, reflecting that the former task assesses a social skill that requires cognitive astuteness. The present findings contribute to the literature by identifying a specific social skill deficit related to Internet addiction and by unveiling a psychological mechanism that explains this relationship, thus providing more concrete guidelines for practitioners to strengthen specific social skills that mitigate both perceived stress and Internet addiction. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
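The mediation result described above (recognition deficit, perceived stress, Internet addiction) can be illustrated with a simple regression-based indirect-effect estimate and a percentile bootstrap, as sketched below; all scores are simulated, and the procedure is a generic stand-in rather than the authors' analysis.

```python
import numpy as np

# Simulated scores standing in for: x = disgust-recognition deficit, m = perceived
# stress, y = Internet addiction severity (values illustrative only).
rng = np.random.default_rng(5)
n = 97
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)          # stress partly driven by the deficit
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def ols_coefs(design, outcome):
    """Least-squares coefficients for a design matrix with an added intercept."""
    X = np.column_stack([np.ones(len(design)), design])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1:]

# Regression-based mediation: a = x -> m, b = m -> y (controlling x),
# indirect effect = a * b, with a percentile bootstrap confidence interval.
a = ols_coefs(x[:, None], m)[0]
b = ols_coefs(np.column_stack([m, x]), y)[0]
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = ols_coefs(x[idx, None], m[idx])[0]
    b_b = ols_coefs(np.column_stack([m[idx], x[idx]]), y[idx])[0]
    boot.append(a_b * b_b)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {a * b:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```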
2010-01-01
Background Symptom control is an important consideration in the choice of treatment for patients with recurrent and/or metastatic squamous cell carcinoma of the head and neck (SCCHN). Patients who demonstrate objective tumour responses to platinum-based chemotherapy are more likely to have symptom relief than those who do not have such responses. A phase III trial (EXTREME) showed that adding the epidermal growth factor receptor (EGFR)-targeting IgG1 monoclonal antibody cetuximab to first-line platinum-based chemotherapy significantly prolongs progression-free and overall survival and increases response rate compared with platinum-based chemotherapy alone. We report here the case of a 60-year old female with recurrent squamous cell carcinoma of the gum who had rapid palliation of symptoms and reduction of facial disease mass following treatment with a combination of carboplatin/5-fluorouracil (5-FU) and cetuximab. Case presentation The patient was diagnosed with T4N0 M0 disease of the oral cavity in November 2006 and underwent surgery, with R0 resection, followed by adjuvant radiotherapy and concomitant cisplatin chemotherapy. Around 3 months later, the disease recurred and the patient had severe pain (9/10 on a visual pain scale), marked facial oedema and a palpable facial mass of 89 mm. The patient received 4 21-day cycles of carboplatin (AUC 5), 5-FU (1,000 mg/m2/day for 4 days) and cetuximab (400 mg/m2 initial dose followed by subsequently weekly doses of 250 mg/m2), with continuation of cetuximab monotherapy at the end of this time, and pain relief with topical fentanyl and oral morphine. After 7 days of treatment, pain had reduced to 2/10, with discontinuation of morphine after 4 days, and the facial mass had reduced to 70 mm. After 2 cycles of treatment, the facial mass had decreased to 40 mm. After 3 cycles of treatment, pain and facial oedema had resolved completely and a cervical computed tomography scan showed a marked reduction in tumour mass. Cetuximab monotherapy was continued uninterrupted for 7 months. Conclusion This case illustrates the rapid reduction of tumour mass and disease-associated pain and oedema that can be achieved with a combination of platinum-based chemotherapy and cetuximab in recurrent and/or metastatic SCCHN. PMID:20181021
Top-down guidance in visual search for facial expressions.
Hahn, Sowon; Gronlund, Scott D
2007-02-01
Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.
We look like our names: The manifestation of name stereotypes in facial appearance.
Zwebner, Yonat; Sellier, Anne-Laure; Rosenfeld, Nir; Goldenberg, Jacob; Mayo, Ruth
2017-04-01
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life: our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person's name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person's true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one's given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one's facial appearance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Trisomy 21 and Facial Developmental Instability
Starbuck, John M.; Cole, Theodore M.; Reeves, Roger H.; Richtsmeier, Joan T.
2013-01-01
The most common live-born human aneuploidy is trisomy 21, which causes Down syndrome (DS). Dosage imbalance of genes on chromosome 21 (Hsa21) affects complex gene-regulatory interactions and alters development to produce a wide range of phenotypes, including characteristic facial dysmorphology. Little is known about how trisomy 21 alters craniofacial morphogenesis to create this characteristic appearance. Proponents of the “amplified developmental instability” hypothesis argue that trisomy 21 causes a generalized genetic imbalance that disrupts evolutionarily conserved developmental pathways by decreasing developmental homeostasis and precision throughout development. Based on this model, we test the hypothesis that DS faces exhibit increased developmental instability relative to euploid individuals. Developmental instability was assessed by a statistical analysis of fluctuating asymmetry. We compared the magnitude and patterns of fluctuating asymmetry among siblings using three-dimensional coordinate locations of 20 anatomic landmarks collected from facial surface reconstructions in four age-matched samples ranging from 4 to 12 years: 1) DS individuals (n=55); 2) biological siblings of DS individuals (n=55); 3) and 4) two samples of typically developing individuals (n=55 for each sample), who are euploid siblings and age-matched to the DS individuals and their euploid siblings (samples 1 and 2). Identification in the DS sample of facial prominences exhibiting increased fluctuating asymmetry during facial morphogenesis provides evidence for increased developmental instability in DS faces. We found the highest developmental instability in facial structures derived from the mandibular prominence and lowest in facial regions derived from the frontal prominence. PMID:23505010
Trisomy 21 and facial developmental instability.
Starbuck, John M; Cole, Theodore M; Reeves, Roger H; Richtsmeier, Joan T
2013-05-01
The most common live-born human aneuploidy is trisomy 21, which causes Down syndrome (DS). Dosage imbalance of genes on chromosome 21 (Hsa21) affects complex gene-regulatory interactions and alters development to produce a wide range of phenotypes, including characteristic facial dysmorphology. Little is known about how trisomy 21 alters craniofacial morphogenesis to create this characteristic appearance. Proponents of the "amplified developmental instability" hypothesis argue that trisomy 21 causes a generalized genetic imbalance that disrupts evolutionarily conserved developmental pathways by decreasing developmental homeostasis and precision throughout development. Based on this model, we test the hypothesis that DS faces exhibit increased developmental instability relative to euploid individuals. Developmental instability was assessed by a statistical analysis of fluctuating asymmetry. We compared the magnitude and patterns of fluctuating asymmetry among siblings using three-dimensional coordinate locations of 20 anatomic landmarks collected from facial surface reconstructions in four age-matched samples ranging from 4 to 12 years: (1) DS individuals (n = 55); (2) biological siblings of DS individuals (n = 55); 3) and 4) two samples of typically developing individuals (n = 55 for each sample), who are euploid siblings and age-matched to the DS individuals and their euploid siblings (samples 1 and 2). Identification in the DS sample of facial prominences exhibiting increased fluctuating asymmetry during facial morphogenesis provides evidence for increased developmental instability in DS faces. We found the highest developmental instability in facial structures derived from the mandibular prominence and lowest in facial regions derived from the frontal prominence. Copyright © 2013 Wiley Periodicals, Inc.
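As a simplified illustration of measuring asymmetry from 3D landmarks, the sketch below reflects a landmark configuration across the midsagittal plane, swaps the left/right labels, and uses a Procrustes fit to score the residual mismatch; fluctuating asymmetry proper would then be assessed from the variation of such scores across individuals after removing directional asymmetry. The landmark coordinates are invented, and the procedure simplifies the full decomposition used in the study.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical paired bilateral facial landmarks (x, y, z) for one individual;
# rows are left-side landmarks followed by their right-side counterparts in the
# same order. Coordinates are invented for illustration; the study used 20
# landmarks from 3D facial surface reconstructions.
rng = np.random.default_rng(6)
left = rng.normal(size=(6, 3)) + np.array([30.0, 0.0, 0.0])
right = left * np.array([-1.0, 1.0, 1.0]) + rng.normal(scale=0.5, size=(6, 3))
config = np.vstack([left, right])

# Reflect across the midsagittal (x = 0) plane and swap left/right labels, then
# Procrustes-align the original and reflected configurations; the residual
# mismatch between matched landmarks is an individual asymmetry score.
reflected = config * np.array([-1.0, 1.0, 1.0])
relabelled = np.vstack([reflected[6:], reflected[:6]])
_, _, disparity = procrustes(config, relabelled)
print(f"asymmetry score (Procrustes disparity): {disparity:.4f}")
```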
Rongo, Roberto; Antoun, Joseph Saswat; Lim, Yi Xin; Dias, George; Valletta, Rosa; Farella, Mauro
2014-09-01
To evaluate the relationship between mandibular divergence and the vertical and transverse dimensions of the face. A sample was recruited from the orthodontic clinic of the University of Otago, New Zealand. The recruited participants (N = 60) were assigned to three different groups based on the mandibular plane angle (hyperdivergent, n = 20; normodivergent, n = 20; and hypodivergent, n = 20). The sample consisted of 31 females and 29 males, with a mean age of 21.1 years (SD ± 5.0). Facial scans were recorded for each participant using a three-dimensional (3D) white-light scanner and then merged to form a single 3D image of the face. Vertical and transverse measurements of the face were assessed from the 3D facial image. The hyperdivergent sample had a significantly larger total and lower anterior facial height than the other two groups (P < .05), although no difference was found for the middle facial height (P > .05). Similarly, there were no significant differences in the transverse measurements of the three study groups (P > .05). Both gender and body mass index (BMI) had a greater influence on the transverse dimensions than mandibular divergence did. Hyperdivergent facial types are associated with a long face but not necessarily a narrow face. Variations in facial soft tissue vertical and transverse dimensions are more likely to be due to gender. Body mass index plays a role in the assessment of mandibular width (GoGo).
A standardized nomenclature for craniofacial and facial anthropometry.
Caple, Jodi; Stephan, Carl N
2016-05-01
Standardized terms and methods have long been recognized as crucial to reduce measurement error and increase reliability in anthropometry. The successful prior use of craniometric landmarks makes extrapolation of these landmarks to the soft tissue context, as analogs, intuitive for forensic craniofacial analyses and facial photogrammetry. However, this extrapolation has not, so far, been systematic. Instead, varied nomenclature and definitions exist for facial landmarks, and photographic analyses are complicated by the generalization of 3D craniometric landmarks to the 2D face space, where analogy is subsequently often lost, complicating anatomical assessments. For example, landmarks requiring palpation of the skull or examination of the 3D surface topography are impossible to legitimately position; the same applies to median landmarks not visible in lateral photographs. To redress these issues without disposing of the craniometric framework that underpins many facial landmarks, we provide an updated and transparent nomenclature for facial description. This nomenclature maintains the original craniometric intent (and base abbreviations) but provides a clear distinction of ill-defined (quasi) landmarks in photographic contexts, as produced when anatomical points are subjectively inferred from shape-from-shading information alone.
The Influence of Facial Signals on the Automatic Imitation of Hand Actions
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2016-01-01
Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, there is little known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate “in the moment” states than enduring traits. These data support the view that a smile primes multiple forms of imitation including the copying actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation. PMID:27833573
The Influence of Facial Signals on the Automatic Imitation of Hand Actions.
Butler, Emily E; Ward, Robert; Ramsey, Richard
2016-01-01
Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, there is little known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate "in the moment" states than enduring traits. These data support the view that a smile primes multiple forms of imitation including the copying actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation.
Investigating Psychosocial Causes of the Tendency for Facial Cosmetic Surgery.
Babadi, Hadis; Fereidooni-Moghadam, Malek; Dashtbozorgi, Bahman; Cheraghian, Bahman
2018-01-22
Despite the importance of cosmetic surgery in improving body image and promoting individuals' physical and mental health, it is accompanied by physical, mental, and economic problems because it is an invasive procedure. Considering these extensive consequences and the rising demand for such surgeries, it is essential to develop programs for reducing such requests. The present study aimed to investigate the psychosocial causes of the tendency for facial cosmetic surgery in patients referred to medical centers in Ahvaz in 2016-2017. This study was conducted on 385 facial cosmetic surgery applicants who were referred to medical centers in Ahvaz and selected using a sequential non-probability sampling method. The data collection tool was a questionnaire divided into two sections: (1) demographic questions and (2) a questionnaire on the psychosocial causes of the tendency for facial cosmetic surgery. The mean scores of the psychological and social causes of the tendency for facial cosmetic surgery were 4.46 (SD = 1.67) and 3.44 (SD = 2.57), respectively. "Being interested in being beautiful" was the most frequent positive response regarding the cause of the tendency to undergo facial cosmetic surgery (88.6%), whereas the least frequent response, at 35.1%, was the "inappropriate psychological state" cause. The results of this study showed that psychological factors affected the participants' tendency for facial cosmetic surgery more than social factors did. Determining and identifying such psychological pressures and providing individual training and psychological support may help prevent individuals from undergoing facial cosmetic surgery. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Impact of facial defect reconstruction on attractiveness and negative facial perception.
Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E
2015-06-01
Measure the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. Prospective, randomized, controlled experiment. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness -2.26 (95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness 1.33 (95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near normal ranges of attractiveness. Iterated principal factor analysis indicated the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces have a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
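The DIBS score above collapses four highly correlated observer ratings into a single factor representing negative facial perception. The sketch below is a minimal, hedged stand-in for that step, using scikit-learn's FactorAnalysis on simulated ratings rather than the iterated principal factor analysis and real observer data reported in the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical observer ratings (rows = rated faces) for the four correlated items:
# disfiguring, important-to-repair, bothersome, severity.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                    # shared "negative perception" factor
ratings = latent @ np.array([[0.9, 0.8, 0.85, 0.7]]) + rng.normal(scale=0.3, size=(200, 4))

z = StandardScaler().fit_transform(ratings)           # standardize each item
fa = FactorAnalysis(n_components=1).fit(z)
dibs = fa.transform(z)[:, 0]                          # one factor score per face

print(fa.components_)   # loadings of the four items on the common factor
print(dibs[:5])         # DIBS-like scores, in standard-deviation units
```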
Kim, Sung-Chan; Kim, Hyung Bae; Jeong, Woo Shik; Koh, Kyung S; Huh, Chang Hun; Kim, Hee Jin; Lee, Woo Shun; Choi, Jong Woo
2018-06-01
Although the harmony of facial proportions is traditionally perceived as an important element of facial attractiveness, there have been few objective studies that have investigated this esthetic balance using three-dimensional photogrammetric analysis. To better understand why some women appear more beautiful, we investigated differences in facial proportions between beauty pageant contestants and ordinary young women of Korean ethnicity using three-dimensional (3D) photogrammetric analyses. A total of 43 prize-winning beauty pageant contestants (group I) and 48 ordinary young women (group II) of Korean ethnicity were photographed using 3D photography. Numerous soft tissue landmarks were identified, and 3D photogrammetric analyses were performed to evaluate 13 absolute lengths, 5 angles, 3 volumetric proportions, and 12 length proportions between soft tissue landmarks. Group I had a greater absolute length of the middle face, nose height, and eye height and width; a smaller absolute length of the lower face, intercanthal width, and nasal width; a larger nasolabial angle; a greater proportion of the upper and middle facial volume, nasal height, and eye height and width; and a lower proportion of the lower facial volume, lower face height, intercanthal width, nasal width, and mouth width. All these differences were statistically significant. These results indicate that there are significant differences between the faces of beauty pageant contestants and ordinary young women, and help elucidate which factors contribute to facial beauty. The group I mean values could be used as reference values for attractive facial profiles. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Literature study on clinical treatment of facial paralysis in the last 20 years using Web of Science
Zhang, Xiaoge; Feng, Ling; Du, Liang; Zhang, Anxiang; Tang, Tian
2012-01-01
BACKGROUND: Facial paralysis is defined as severe or complete loss of facial muscle motor function. OBJECTIVE: The study was undertaken to explore a bibliometric approach to quantitatively assess the research on clinical treatment of facial paralysis using rehabilitation, physiotherapy and acupuncture, based on Web of Science from 1992 to 2011. DESIGN: Bibliometric approach. DATA RETRIEVAL: A bibliometric analysis based on the publications in Web of Science was performed using key words such as "facial paralysis", "rehabilitation", "physiotherapy" and "acupuncture". INCLUSION CRITERIA: (1) Research articles on the clinical treatment of facial paralysis using acupuncture or physiotherapy (e.g. exercise, electro-stimulation) and other rehabilitation methods; (2) research on human and animal fundamentals, clinical trials and case reports; (3) article types: article, review, proceedings paper, note, letter, editorial material, discussion, book chapter; (4) publication year: 1992-2011 inclusive. EXCLUSION CRITERIA: (1) Articles on the causes and diagnosis of facial paralysis; (2) article type: correction; (3) articles from the following databases: all databases related to social science and chemical databases in Web of Science. MAIN OUTCOME MEASURES: (1) Overall number of publications; (2) number of publications annually; (3) number of citations received annually; (4) top cited paper; (5) subject categories of publication; (6) the number of countries in which the articles were published; (7) distribution of output across journals. RESULTS: The overall output stands at 3,543 research articles addressing the clinical treatment of facial paralysis in Web of Science during the study period. There is also a marked increase in the number of publications on the subject "facial paralysis treatments using rehabilitation" during the first decade of the 21st century, except in 2004 and 2006, when there are perceptible drops in the number of articles published. The only other year during the study period to see such a drop is 1993. Specifically, 192 articles were published on facial paralysis treated by rehabilitation in the past two decades, far more than the output on physiotherapy treatment. Physiotherapy treatment, including acupuncture, accounted for only 25 articles, with over 80% of these written by Chinese researchers and clinicians. Ranked by region, the USA is by far the most productive country in terms of the number of publications on facial paralysis rehabilitation and physiotherapy research. From another angle, the journals that focus on otolaryngology published the largest number of articles on rehabilitation and physiotherapy, whereas most acupuncture studies on facial paralysis were published in alternative and complementary medicine journals. CONCLUSION: The study of facial paralysis remains an area of active investigation and innovation. Further clinical studies in humans addressing the use of growth factors or stem cells continue to work toward successful facial nerve regeneration. PMID:25767492
Zhang, Xiaoge; Feng, Ling; Du, Liang; Zhang, Anxiang; Tang, Tian
2012-01-15
Facial paralysis is defined as severe or complete loss of facial muscle motor function. The study was undertaken to explore a bibliometric approach to quantitatively assess the research on clinical treatment of facial paralysis using rehabilitation, physiotherapy and acupuncture, based on Web of Science from 1992 to 2011. Bibliometric approach. A bibliometric analysis based on the publications in Web of Science was performed using key words such as "facial paralysis", "rehabilitation", "physiotherapy" and "acupuncture". Inclusion criteria: (1) research articles on the clinical treatment of facial paralysis using acupuncture or physiotherapy (e.g. exercise, electro-stimulation) and other rehabilitation methods; (2) research on human and animal fundamentals, clinical trials and case reports; (3) article types: article, review, proceedings paper, note, letter, editorial material, discussion, book chapter; (4) publication year: 1992-2011 inclusive. Exclusion criteria: (1) articles on the causes and diagnosis of facial paralysis; (2) article type: correction; (3) articles from the following databases: all databases related to social science and chemical databases in Web of Science. Outcome measures: (1) overall number of publications; (2) number of publications annually; (3) number of citations received annually; (4) top cited paper; (5) subject categories of publication; (6) the number of countries in which the articles were published; (7) distribution of output across journals. The overall output stands at 3,543 research articles addressing the clinical treatment of facial paralysis in Web of Science during the study period. There is also a marked increase in the number of publications on the subject "facial paralysis treatments using rehabilitation" during the first decade of the 21st century, except in 2004 and 2006, when there are perceptible drops in the number of articles published. The only other year during the study period to see such a drop is 1993. Specifically, 192 articles were published on facial paralysis treated by rehabilitation in the past two decades, far more than the output on physiotherapy treatment. Physiotherapy treatment, including acupuncture, accounted for only 25 articles, with over 80% of these written by Chinese researchers and clinicians. Ranked by region, the USA is by far the most productive country in terms of the number of publications on facial paralysis rehabilitation and physiotherapy research. From another angle, the journals that focus on otolaryngology published the largest number of articles on rehabilitation and physiotherapy, whereas most acupuncture studies on facial paralysis were published in alternative and complementary medicine journals. The study of facial paralysis remains an area of active investigation and innovation. Further clinical studies in humans addressing the use of growth factors or stem cells continue to work toward successful facial nerve regeneration.
Energy-Based Facial Rejuvenation: Advances in Diagnosis and Treatment.
Britt, Christopher J; Marcus, Benjamin
2017-01-01
The market for nonsurgical, energy-based facial rejuvenation techniques has increased exponentially since lasers were first used for skin rejuvenation in 1983. Advances in this area have led to a wide range of products that require the modern facial plastic surgeon to have a large repertoire of knowledge. To serve as a guide for current trends in the development of technology, applications, and outcomes of laser and laser-related technology over the past 5 years. We performed a review of PubMed from January 1, 2011, to March 1, 2016, focused on randomized clinical trials, meta-analyses, systematic reviews, and clinical practice guidelines, including case-control studies, case studies, and case reports when necessary, and included 14 articles we deemed landmark articles published before 2011. Three broad categories of technology are leading energy-based rejuvenation technology: lasers, light therapy, and non-laser-based thermal tightening devices. Laser light therapy has continued to diversify with the use of ablative and nonablative resurfacing technologies, fractionated lasers, and their combined use. Light therapy has developed for use in combination with other technologies or as a stand-alone treatment. Finally, thermally based nonlaser skin-tightening devices, such as radiofrequency (RF) and intense focused ultrasonography (IFUS), are evolving technologies that have changed rapidly over the past 5 years. Improvements in the safety and efficacy of energy-based treatment have expanded the base of patients who consider these therapies viable options. With a wide variety of options, the modern facial plastic surgeon can have a frank discussion with the patient regarding nonsurgical techniques that were never before available. Many of these patients can now derive benefit from treatments requiring significantly less downtime than before, while the clinician can augment the treatment to maximize benefit and fit the patient's time schedule.
Profico, Antonio; Piras, Paolo; Buzi, Costantino; Di Vincenzo, Fabio; Lattarini, Flavio; Melchionna, Marina; Veneziano, Alessio; Raia, Pasquale; Manzi, Giorgio
2017-12-01
The evolutionary relationship between the base and face of the cranium is a major topic of interest in primatology. These areas of the skull possibly respond to different selective pressures, yet they are often said to be tightly integrated. In this paper, we analyzed shape variability in the cranial base and the facial complex in Cercopithecoidea and Hominoidea. We used a landmark-based approach to single out the effects of size (evolutionary allometry), morphological integration, modularity, and phylogeny (under Brownian motion) on skull shape variability. Our results demonstrate that the cranial base and the facial complex exhibit different responses to different factors, which produces a low degree of morphological integration between them. Facial shape variation appears primarily influenced by body size and sexual dimorphism, whereas the cranial base is mostly influenced by functional factors. The different adaptations affecting the two modules suggest they are best studied as separate and independent units, and that, at least when dealing with Catarrhines, caution must be exercised with the notion of strong cranial integration that is commonly invoked for the evolution of their skull shape. © 2017 Wiley Periodicals, Inc.
A facial attractiveness account of gender asymmetries in interracial marriage.
Lewis, Michael B
2012-01-01
In the US and UK, more Black men are married to White women than vice versa and there are more White men married to Asian women than vice versa. Models of interracial marriage, based on the exchange of racial status for other capital, cannot explain these asymmetries. A new explanation is offered based on the relative perceived facial attractiveness of the different race-by-gender groups. This explanation was tested using a survey of perceived facial attractiveness. This found that Black males are perceived as more attractive than White or East Asian males whereas among females, it is the East Asians that are perceived as most attractive on average. Incorporating these attractiveness patterns into the model of marriage decisions produces asymmetries in interracial marriage similar to those in the observed data in terms of direction and relative size. This model does not require differences in status between races nor different strategies based on gender. Predictions are also generated regarding the relative attractiveness of those engaging in interracial marriage.
Hujoel, P P; Bollen, A-M; Yuen, K C J; Hujoel, I A
2016-10-01
It has been suggested that facial traits are informative on the inherited susceptibility to tuberculosis and obesity, two current global health issues. Our aim was to compare the phenotypic characteristics of adolescents with dental markers for a concave (n=420), a convex (n=978), and a straight (n=3542) facial profile in a nationally representative sample of United States adolescents. The results show that adolescents with a concave facial profile, when compared to a straight facial profile, had an increased waist-to-height ratio (Δ, 1.1 [95% CI 0.5-1.7], p<0.003) and an increased acne prevalence (OR, 1.5 [95% CI 1.2-1.9], p<0.001). Adolescents with a convex facial profile, when compared to a straight facial profile, had an increased prevalence of tuberculosis (OR, 4.3 [95% CI 1.4-13.1], p<0.02), increased ectomorphy (Δ, 0.3 [95% CI 0.2-0.4], p<0.0001), increased left-handedness (OR, 1.4 [95% CI 1.1-1.7], p<0.007), increased color-blindness (OR, 1.7 [95% CI 1.3-2.3], p<0.004), and rhesus ee phenotype (OR, 1.3 [95% CI 1.1-1.5], p<0.008). Adolescents with a concave facial profile, when compared to a convex profile, had increased mesomorphy (Δ, 1.3 [95% CI 1.1-1.5], p<0.0001), increased endomorphy (Δ, 0.5 [95% CI 0.4-0.6], p<0.0001), lower ectomorphy (Δ, 0.5 [95% CI 0.4-0.6], p<0.0001), and lower vocabulary test scores (Δ, 2.3 [95% CI 0.8-3.8], p<0.008). It is concluded that population-based survey data confirm that distinct facial features are associated with distinct somatotypes and distinct disease susceptibilities. Copyright © 2016 Elsevier GmbH. All rights reserved.
Allareddy, Veerasathpurush; Itty, Abraham; Maiorini, Elyse; Lee, Min Kyeong; Rampa, Sankeerth; Allareddy, Veerajalandhar; Nalliah, Romesh P
2014-09-01
The objectives of this study were to provide nationally representative estimates of hospital-based emergency department (ED) visits for facial fractures in children and adolescents, examine the burden associated with such visits, identify common types of facial fracture, and examine the role of patient-related demographic factors on the causes of facial fractures. The Nationwide Emergency Department Sample for 2008 to 2010 was used. All ED visits with a diagnosis of facial fractures in those no older than 21 years were selected. Demographic characteristics, types of facial fracture, causes of injuries, and hospital charges were examined. During the study period, 336,124 ED visits were for facial fractures in those no older than 21 years. Late adolescents (18 to 21 yr old) and middle adolescents (15 to 17 yr old) comprised 45.6% and 26.6% of all ED visits, respectively. Male patients comprised 74.7% of ED visits. The most common facial fractures were those of the nasal bones and mandible. Younger children were more likely to have falls, pedal cycle accidents, pedestrian accidents, and transport accidents, whereas older groups were more likely to have firearm injuries, motor vehicle traffic accidents, and assaults (P < .05). Female patients were more likely to have falls, motor vehicle traffic accidents, and transport accidents, whereas male patients were more likely to have firearm injuries, pedal cycle accidents, and assaults (P < .05). Those residing at low annual income household levels were at a high risk for having firearm injuries, motor vehicle traffic accidents, and transport accidents (P < .05). Late adolescents, middle adolescents, and male patients comprise a significant proportion of these ED visits. Age, gender, and household income levels are significantly associated with the causes of facial fracture injuries. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Zhao, Xiaoxin; Sui, Yuxiu; Yao, Jingjing; Lv, Yiding; Zhang, Xinyue; Jin, Zhuma; Chen, Lijun; Zhang, Xiangrong
2017-07-03
Facial emotion perception is impaired in schizophrenia. Although the pathology of schizophrenia is thought to involve abnormality in white matter (WM), few studies have examined the correlation between facial emotion perception and WM abnormalities in never-medicated patients with first-episode schizophrenia. The present study tested associations between facial emotion perception and WM integrity in order to investigate the neural basis of impaired facial emotion perception in schizophrenia. Sixty-three patients with schizophrenia and thirty control subjects underwent facial emotion categorization (FEC). The FEC data were fitted with a logistic function model, with the shift point and slope as outcome measures that were subsequently compared between groups using independent-samples t-tests. Severity of symptoms was measured using a five-factor model of the Positive and Negative Syndrome Scale (PANSS). Voxelwise group comparison of WM fractional anisotropy (FA) was performed using tract-based spatial statistics (TBSS). The correlation between impaired facial emotion perception and FA reduction was examined in patients using simple regression analysis within brain areas that showed a significant FA reduction in patients compared with controls. The same correlation analysis was also performed for control subjects in the whole brain. The patients with schizophrenia showed a higher shift point and a steeper slope than control subjects in FEC. The patients showed a significant FA reduction in left deep WM in the parietal, temporal and occipital lobes, a small portion of the corpus callosum (CC), and the corona radiata. In voxelwise correlation analysis, we found that facial emotion perception significantly correlated with reduced FA in several WM regions, including the left forceps major (FM), left inferior longitudinal fasciculus (ILF), left inferior fronto-occipital fasciculus (IFOF), and the left splenium of the CC. The correlation analyses in healthy controls revealed no significant correlation of FA with FEC performance. These results indicate that disrupted WM integrity in these regions constitutes a potential neural basis for the facial emotion perception impairments in schizophrenia. Copyright © 2017 Elsevier Inc. All rights reserved.
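The abstract above fits a logistic function to facial emotion categorization (FEC) responses and compares the resulting shift point and slope between groups. A minimal sketch of that kind of fit is shown below, using scipy and invented response proportions; it only illustrates the curve-fitting step, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, shift, slope):
    """Probability of categorizing a morphed face as, say, 'happy'."""
    return 1.0 / (1.0 + np.exp(-slope * (x - shift)))

# Hypothetical data: morph level 0..100 (sad -> happy) and the proportion of
# 'happy' responses at each level for one participant.
morph_level = np.linspace(0, 100, 11)
p_happy = np.array([0.02, 0.03, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99, 1.00])

(shift, slope), _ = curve_fit(logistic, morph_level, p_happy, p0=[50.0, 0.1])
print(f"shift point = {shift:.1f}, slope = {slope:.3f}")
# Group comparisons (e.g. independent-samples t-tests on shift and slope)
# would then be run across participants, as described in the abstract.
```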
Perception of Age, Attractiveness, and Tiredness After Isolated and Combined Facial Subunit Aging.
Forte, Antonio Jorge; Andrew, Tom W; Colasante, Cesar; Persing, John A
2015-12-01
Patients often seek help to redress aging that affects various regions of the face (subunits). The purpose of this study was to determine how aging of different facial subunits impacts perception of age, attractiveness, and tiredness. Frontal and lateral view facial photographs of a middle-aged woman were modified using imaging software to independently age different facial features. Sixty-six subjects were administered a questionnaire and presented with a baseline unmodified picture and others containing different individual or grouped aging of facial subunits. Test subjects were asked to estimate the age of the subject in the image and quantify (0-10 scale) how "tired" and "attractive" they appeared. Facial subunits were then ranked by their impact on perception of age, attractiveness, and tiredness. Age and attractiveness showed a strong inverse correlation of approximately -0.95 in both lateral and frontal views. From greatest to least impact on perceived age, the ranking for frontal view facial subunits was full facial aging, middle third, lower third, upper third, vertical lip rhytides, horizontal forehead rhytides, jowls, upper eyelid ptosis, loss of malar volume, lower lid fat herniation, deepening glabellar furrows, and deepening nasolabial folds. From greatest to least impact on perceived age, the ranking for lateral view facial subunits was severe neck ptosis, jowls, moderate neck ptosis, vertical lip rhytides, crow's feet, lower lid fat herniation, loss of malar volume, and elongated earlobe. This study provides a preliminary template for further research to determine which anatomical subunit will have the most substantial effect on an aged appearance, as well as on the perception of tiredness and attractiveness. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Salagnac, Jean-Michel
2016-09-01
The mandible consists of different segments, each of which possesses its own specific characteristics regarding emergence, ossification during growth and pathologies. Orthodontists need to be very familiar with these developmental anomalies if they are to avoid failure in their orthopedic or orthodontic treatments and in order to understand the reasons for the lack of success of "conventional" treatments. Each segment must develop correctly if the mandible is to achieve optimal development and occupy a normal position within the cranio-facial complex. The position of the mandible in the cranio-facial block is also conditioned by its attachment to the base of the skull. Combining a detailed semiologic study and a three-dimensional architectural and structural radiologic analysis of clinical cases, this article investigates the various anomalies affecting the mandibular segments and their impact on the craniofacial structure as a whole. An understanding of these anomalies and this analytical method can enable clinicians to perform early diagnosis, avoid undertaking orthopedic and orthodontic treatments which are likely to fail, understand the reasons for unsuccessful "conventional" treatments, provide an orthopedic-surgical guide and make it possible to inform patients correctly. Anomalies affecting the growth of the mandible and its position on the cranial base, and their impact on cranio-facial skeletal balance, are clearly revealed by structural and architectural analysis, which pinpoints the different clinical elements in skeletal Class II and Class III cases. In maxillo-dento-facial orthopedics, when confronted with a pathology of mandibular origin, it is essential to carefully study the radiographs of each segment of the mandible, to seek out the minor forms of the anomalies and to calculate the position of the mandible on the cranial base relative to the neighboring structures: the skull, the cervical vertebrae and the maxilla. © EDP Sciences, SFODF, 2016.
Gibelli, Daniele; Pucciarelli, Valentina; Poppa, Pasquale; De Angelis, Danilo; Cummaudo, Marco; Pisoni, Luca; Codari, Marina; Cattaneo, Cristina; Sforza, Chiarella
2018-03-01
Distinguishing one twin from the other based on external appearance is challenging; nevertheless, facial morphology may provide individualizing features that help distinguish twin siblings. This study aims to present an innovative method for facial assessment in monozygotic twins for personal identification, based on the registration and comparison of 3D models of faces. Ten pairs of monozygotic twins aged between 25 and 69 years were each acquired twice by a stereophotogrammetric system (VECTRA-3D® M3: Canfield Scientific, Inc., Fairfield, NJ); the 3D reconstruction of each person was then registered and superimposed onto the model belonging to the same person (self-matches), the corresponding sibling (twin-matches) and unrelated participants from the other pairs (mismatches); RMS (root mean square) point-to-point distances were automatically calculated for all 220 superimpositions. One-way ANOVA was used to evaluate the differences among mismatches, twin-matches and self-matches (p < .05). RMS values for self-matches, twin-matches and mismatches were respectively 1.0 mm (SD: 0.3 mm), 1.9 mm (0.5 mm) and 3.4 mm (0.70 mm). Statistically significant differences were found among the three groups (p < .01). Comparing RMS values in the three groups, mean facial variability in twin siblings was 55.9% of that assessed between unrelated persons and about twice that observed between models belonging to the same individual. The present study proposed an innovative method for the facial assessment of twin siblings, based on 3D surface analysis, which may provide additional information concerning the relation between genes and environment. Copyright © 2017 Elsevier B.V. All rights reserved.
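The twin-comparison method above summarizes each superimposition with an RMS point-to-point distance between registered 3D facial models. The sketch below illustrates that final step under the assumption that point correspondences are already known; the rigid alignment is a standard Kabsch fit and is not necessarily the registration performed by the stereophotogrammetric software.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch) of corresponding 3D points src -> dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(u @ vt))           # avoid reflections
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return src_c @ rot + dst.mean(0)

def rms_distance(a, b):
    """Root-mean-square point-to-point distance between corresponding points."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Hypothetical corresponding surface points (in mm) from two facial scans.
rng = np.random.default_rng(1)
face_a = rng.normal(scale=40.0, size=(500, 3))
face_b = face_a + rng.normal(scale=1.0, size=(500, 3))   # e.g. a twin sibling's scan

print(rms_distance(rigid_align(face_a, face_b), face_b))  # ~1 mm, cf. the values above
```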
Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus)
Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura
2017-01-01
Background: Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods: The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants' level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques' facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results: Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques' facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed faces, and a trend was found for aggressive faces. However, these participants, even when previously exposed to pictures, had difficulties in recognising aggressive, distressed and friendly faces above chance level. Discussion: These results do not support the universality hypothesis, as exposed and naïve participants had difficulties in correctly identifying aggressive, distressed and friendly faces. Exposure to facial expressions improved their correct recognition. In addition, the findings suggest that providing simple exposure to 2D pictures (for example, information signs explaining animals' facial signalling in zoos or animal parks) is not a sufficient educational tool to reduce tourists' misinterpretations of macaque emotion. Additional measures, such as keeping a safe distance between tourists and wild animals, as well as reinforcing learning via videos or supervised visits led by expert guides, could reduce such issues and improve both animal welfare and tourist experience. PMID:28584731
Tansatit, Tanvaa; Apinuntrum, Prawit; Phetudom, Thavorn
2015-12-01
The auriculotemporal nerve is one of the peripheral nerves that communicates with the facial nerve. However, the function of these communications is poorly understood. Details of how these communications form and connect with each other are still unclear. In addition, a reliable anatomical landmark for locating these communications during surgery has not been sufficiently described. Microdissection was performed on 20 lateral hemifaces of 10 soft-embalmed cadavers to investigate facial-auriculotemporal nerve communications, with emphasis on determining their function. The auriculotemporal nerve was identified in the retromandibular space and traced towards its terminations. The communicating branches were followed and the anatomical relationships to surrounding structures observed. The auriculotemporal nerve is suspended above the maxillary artery in the dense retromandibular fascia behind the mandibular ramus. It forms a knot and fans out, providing multiple branches in all directions in the sagittal plane. Inferiorly, it connects with the maxillary periarterial plexus, while minute branches supply the temporomandibular joint anteriorly. The larger branches mainly communicate with the branches of the temporofacial division of the facial nerve, and the auricular branches enter the fascia of the auricular cartilage posteriorly. The temporal branches and occasionally the zygomatic branches arise superiorly to distribute within the temporoparietal fascia. The auriculotemporal nerve forms the parotid retromandibular plexus through two types of communication. It sends one to three branches to join the zygomatic and buccal branches of the facial nerve at the branching area of the temporofacial division. It also communicates with the periarterial plexus of the superficial temporal and maxillary arteries. This plexus continues anteriorly along the branches of the facial nerve and the periarterial plexus of the transverse facial artery as the parotid periductal autonomic plexus, supplying the branches of the parotid duct within the loop of the two main divisions of the parotid gland. A single cutaneous zygomatic branch arising from the auriculotemporal nerve in some specimens, the intraparotid communications with the zygomatic and the buccal trunks of the facial nerve, the retromandibular communications with the superficial temporal-maxillary periarterial plexuses, and the periductal autonomic plexus between the loop of the two main facial divisions lead to the suggestion that these communications of the auriculotemporal nerve convey secretomotor fibers to the zygomatic and buccal branches of the facial nerve. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Mitchem, Dorian G.; Zietsch, Brendan P.; Wright, Margaret J.; Martin, Nicholas G.; Hewitt, John K.; Keller, Matthew C.
2015-01-01
Theories in both evolutionary and social psychology suggest that a positive correlation should exist between facial attractiveness and general intelligence, and several empirical observations appear to corroborate this expectation. Using highly reliable measures of facial attractiveness and IQ in a large sample of identical and fraternal twins and their siblings, we found no evidence for a phenotypic correlation between these traits. Likewise, neither the genetic nor the environmental latent factor correlations were statistically significant. We supplemented our analyses of new data with a simple meta-analysis that found evidence of publication bias among past studies of the relationship between facial attractiveness and intelligence. In view of these results, we suggest that previously published reports may have overestimated the strength of the relationship and that the theoretical bases for the predicted attractiveness-intelligence correlation may need to be reconsidered. PMID:25937789
On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information
NASA Astrophysics Data System (ADS)
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
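The abstract above argues that the visual-facial and keyboard-stroke modalities are complementary. One generic way to combine two such modality-specific classifiers is weighted late fusion of their per-emotion probabilities, sketched below with invented weights and probability vectors; this is only an illustration of the idea, not the combination scheme actually used by the authors.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse(p_visual, p_keyboard, w_visual=0.6, w_keyboard=0.4):
    """Weighted late fusion of two per-emotion probability vectors."""
    p_visual, p_keyboard = np.asarray(p_visual), np.asarray(p_keyboard)
    fused = w_visual * p_visual + w_keyboard * p_keyboard
    return fused / fused.sum()

# Hypothetical outputs of the two subsystems for one observation.
p_face = [0.30, 0.05, 0.10, 0.40, 0.05, 0.10]   # visual-facial subsystem
p_keys = [0.10, 0.05, 0.05, 0.60, 0.10, 0.10]   # keyboard-stroke subsystem

fused = fuse(p_face, p_keys)
print(EMOTIONS[int(np.argmax(fused))], fused.round(3))
```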
Odat, Haitham; Alawneh, Khaled; Al-Qudah, Mohannad
2018-01-01
Jugular paragangliomas are slow-growing, highly vascular tumors arising from jugular paraganglia. The gold standard of treatment is complete surgical resection. Pre-operative embolization of these highly vascular tumors is essential to reduce intra-operative bleeding, allow safe dissection, and decrease operative time and post-operative complications. Onyx (ethylene-vinyl alcohol copolymer) has been widely used as a permanent occluding material for vascular tumors of the skull base because of its unique physical properties. We present the case of a 33-year-old woman who had left-sided facial nerve paralysis after Onyx embolization of a jugular paraganglioma. The tumor was resected on the day after embolization. The patient was followed up for 30 months with serial imaging studies and facial nerve assessment. The facial nerve function improved from House–Brackmann grade V to grade II at the last visit. PMID:29518926
Odat, Haitham; Alawneh, Khaled; Al-Qudah, Mohannad
2018-03-07
Jugular paragangliomas are slow-growing, highly vascular tumors arising from jugular paraganglia. The gold standard of treatment is complete surgical resection. Pre-operative embolization of these highly vascular tumors is essential to reduce intra-operative bleeding, allow safe dissection, and decrease operative time and post-operative complications. Onyx (ethylene-vinyl alcohol copolymer) has been widely used as a permanent occluding material for vascular tumors of the skull base because of its unique physical properties. We present the case of a 33-year-old woman who had left-sided facial nerve paralysis after Onyx embolization of a jugular paraganglioma. The tumor was resected on the day after embolization. The patient was followed up for 30 months with serial imaging studies and facial nerve assessment. The facial nerve function improved from House-Brackmann grade V to grade II at the last visit.
Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.
Pecchinenda, Anna; Petrucci, Manuel
2016-01-01
Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.
Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load
Pecchinenda, Anna; Petrucci, Manuel
2016-01-01
Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic facial expression recognition using vision is an important subject for human-robot interaction. We propose a human face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware is composed of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. By using the output of this module, the autonomous agent keeps the human face targeted frontally at all times. To accomplish this, the robot platform moves along an arc centered at the human, and the robotic head moves in synchrony when necessary. In the proposed probabilistic classifier, information is propagated from the previous time instant to the current one in a lower level of the network. Moreover, facial expressions are recognized using not only positive evidence but also negative evidence.
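The classifier above propagates information from the previous time instant to the current one within the network. The toy sketch below illustrates that kind of temporal propagation as a simple forward (filtering) update over a hidden expression state; the states, transition matrix, and evidence likelihoods are invented, and the sketch is not the authors' actual Dynamic Bayesian Network.

```python
import numpy as np

STATES = ["neutral", "happy", "angry"]          # hypothetical expression states

# Invented model parameters: P(state_t | state_{t-1}) and P(evidence | state).
transition = np.array([[0.8, 0.1, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.2, 0.1, 0.7]])
likelihood_smile_detected = np.array([0.2, 0.8, 0.1])   # a positive visual cue
likelihood_brow_lowered   = np.array([0.3, 0.1, 0.8])   # another cue

def forward_update(belief, evidence_likelihood):
    """One filtering step: propagate the belief in time, then weight by evidence."""
    predicted = transition.T @ belief           # temporal propagation from t-1 to t
    posterior = predicted * evidence_likelihood
    return posterior / posterior.sum()

belief = np.array([1/3, 1/3, 1/3])              # uniform prior at t = 0
for obs in [likelihood_smile_detected, likelihood_smile_detected, likelihood_brow_lowered]:
    belief = forward_update(belief, obs)
    print(dict(zip(STATES, belief.round(3))))
```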
Facial paralysis caused by metastasis of breast carcinoma to the temporal bone.
Lan, Ming-Ying; Shiao, An-Suey; Li, Wing-Yin
2004-11-01
Metastatic tumors to the temporal bone are very rare. The most common sites of origin of temporal bone metastases are the breast, lung, kidney, gastrointestinal tract, larynx, prostate gland, and thyroid gland. Spread to the temporal bone most commonly occurs by the hematogenous route. The common otologic symptoms that accompany facial nerve paralysis are often attributed to a mastoid infection. We report a case of breast carcinoma presenting with otalgia, otorrhea, and facial paralysis of 2 months' duration. The patient was initially diagnosed with mastoiditis; the clinical impression was later revised to metastatic breast carcinoma to the temporal bone, based on the pathologic findings. Metastatic disease should be considered as a possible etiology in patients with a clinical history of malignant neoplasms presenting with common otologic or vestibular symptoms, especially facial nerve paralysis.
Lerut, B; Vosbeck, J; Linder, T E
2011-04-01
We present a rare case of a facial nerve granular cell tumour in the right parotid gland, in a 10-year-old boy. A parotid or neurogenic tumour was suspected, based on magnetic resonance imaging. Intra-operatively, strong adhesions to surrounding structures were found, and a midfacial nerve branch had to be sacrificed for complete tumour removal. Recent reports verify that granular cell tumours arise from Schwann cells of peripheral nerve branches. The rarity of this tumour within the parotid gland, its origin from peripheral nerves, its sometimes misleading imaging characteristics, and its rare presentation with facial weakness and pain all have considerable implications on the surgical strategy and pre-operative counselling. Fine needle aspiration cytology may confirm the neurogenic origin of this lesion. When resecting the tumour, the surgeon must anticipate strong adherence to the facial nerve and be prepared to graft, or sacrifice, certain branches of this nerve.
Photo-anthropometric study on face among Garo adult females of Bangladesh.
Akhter, Z; Banu, M L A; Alam, M M; Hossain, S; Nazneen, M
2013-08-01
Facial anthropometry has well-known implications in health-related fields. Measurement of the human face is used for personal identification in forensic medicine, plastic surgery, orthodontics, archaeology, hair-style design and the examination of differences between races and ethnicities. Facial anthropometry provides an indication of the variations in facial shape in a specified population. Bangladesh harbours many cultures and people of different races because of colonial rule under past regimes. Standards based on ethnic or racial data are desirable because these standards reflect the potentially different patterns of craniofacial growth resulting from racial, ethnic and sexual differences. In this context, the present study attempted to establish ethnicity-specific anthropometric data for Christian Garo adult females of Bangladesh. The study was observational, cross-sectional and primarily descriptive in nature with some analytical components, and it was carried out with a total of 100 Christian Garo adult females aged between 25 and 45 years. Three vertical facial dimensions, namely facial height from 'trichion' to 'gnathion', nasal length and total vermilion height, were measured by the photographic method. Although these measurements were taken photographically, they were converted into actual size using a physically measured variable, the distance between the two angles of the mouth (cheilion to cheilion). The data were then statistically analyzed to find their normative values. The study also examined the possible correlation of the facial height from 'trichion' to 'gnathion' with nasal length and total vermilion height. Multiplication factors were derived for estimating facial height from nasal length and total vermilion height. Comparisons were made between 'estimated' and 'measured' values using t-tests. The mean (+/- SD) nasal length and total vermilion height were 4.53 +/- 0.36 cm and 1.63 +/- 0.23 cm respectively, and the mean (+/- SD) facial height from 'trichion' to 'gnathion' was 16.88 +/- 1.11 cm. Nasal length and total vermilion height also showed a significant positive correlation with facial height from 'trichion' to 'gnathion'. No significant difference was found between the 'measured' facial height from 'trichion' to 'gnathion' and the values 'estimated' from nasal length and total vermilion height.
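The photogrammetric procedure above rescales photographic measurements to actual size using one physically measured distance (cheilion to cheilion) and then derives multiplication factors for estimating facial height. The short sketch below walks through that arithmetic with invented example values for a single participant; the study itself averages such factors over the whole sample.

```python
# Hypothetical values for one reference participant (photo units are arbitrary;
# physical distances are in cm).
photo_mouth_width   = 2.1    # cheilion-to-cheilion measured on the photograph
actual_mouth_width  = 4.8    # the same distance measured directly on the face
photo_nasal_length  = 2.0
photo_facial_height = 7.4    # trichion-to-gnathion on the photograph

scale = actual_mouth_width / photo_mouth_width        # converts photo units to cm
nasal_length  = photo_nasal_length * scale            # actual nasal length, cm
facial_height = photo_facial_height * scale           # actual facial height, cm

# Multiplication factor for estimating facial height from nasal length
# (the study averages such factors over the sample before applying them).
mf_nasal = facial_height / nasal_length
print(f"scale = {scale:.3f} cm per photo unit")
print(f"facial height = {facial_height:.2f} cm, multiplication factor = {mf_nasal:.2f}")

# Applying the factor to a hypothetical new subject whose nasal length is 4.3 cm:
print(f"estimated facial height = {mf_nasal * 4.3:.2f} cm")
```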
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA)-based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Useful as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
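The scheme above applies LDA to the face images at several scales and combines the per-scale outputs with the product rule. The hedged sketch below reproduces that ensemble idea with scikit-learn, using random stand-in feature matrices in place of real multiscale face images.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_scales = 400, 100, 50, 3

# Stand-in data: one feature matrix per image scale, binary labels (Asian / non-Asian).
y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)
X_train = [rng.normal(size=(n_train, n_features)) + y_train[:, None] * 0.5 for _ in range(n_scales)]
X_test = [rng.normal(size=(n_test, n_features)) + y_test[:, None] * 0.5 for _ in range(n_scales)]

# One LDA classifier per scale.
ldas = [LinearDiscriminantAnalysis().fit(X, y_train) for X in X_train]

# Product-rule combination: multiply the per-scale posterior probabilities.
probs = np.ones((n_test, 2))
for lda, X in zip(ldas, X_test):
    probs *= lda.predict_proba(X)
pred = probs.argmax(axis=1)

print("ensemble accuracy:", (pred == y_test).mean())
```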
Genetic Modifiers of the Physical Malformations in Velo-Cardio-Facial Syndrome/DiGeorge Syndrome
ERIC Educational Resources Information Center
Aggarwal, Vimla S.; Morrow, Bernice E.
2008-01-01
Velo-cardio-facial syndrome/DiGeorge syndrome (VCFS/DGS), the most common micro-deletion disorder in humans, is characterized by craniofacial, parathyroid, and thymic defects as well as cardiac outflow tract malformations. Most patients have a similar hemizygous 3 million base pair deletion on 22q11.2. Studies in mouse have shown that "Tbx1", a…
Bekele, E; Bian, D; Peterman, J; Park, S; Sarkar, N
2017-06-01
Schizophrenia is a life-long, debilitating psychotic disorder with poor outcome that affects about 1% of the population. Although pharmacotherapy can alleviate some of the acute psychotic symptoms, residual social impairments present a significant barrier that prevents successful rehabilitation. With limited resources and access to social skills training opportunities, innovative technology has emerged as a potentially powerful tool for intervention. In this paper, we present a novel virtual reality (VR)-based system for understanding facial emotion processing impairments that may lead to poor social outcome in schizophrenia. We henceforth call it a VR System for Affect Analysis in Facial Expressions (VR-SAAFE). This system integrates a VR-based task presentation platform that can minutely control the facial expressions of an avatar, with or without accompanying verbal interaction, with an eye-tracker to quantitatively measure a participant's real-time gaze and a set of physiological sensors to infer his/her affective states, allowing in-depth understanding of the emotion recognition mechanism of patients with schizophrenia based on quantitative metrics. A usability study with 12 patients with schizophrenia and 12 healthy controls was conducted to examine processing of the emotional faces. Preliminary results indicated that there were significant differences in the way patients with schizophrenia processed and responded to the emotional faces presented in the VR environment compared with healthy control participants. These preliminary results underscore the utility of such a VR-based system, which enables precise and quantitative assessment of social skill deficits in patients with schizophrenia.
I care, even after the first impression: Facial appearance-based evaluations in healthcare context.
Mattarozzi, Katia; Colonnello, Valentina; De Gioia, Francesco; Todorov, Alexander
2017-06-01
Prior research has demonstrated that healthcare providers' implicit biases may contribute to healthcare disparities. Independent research in social psychology indicates that facial appearance-based evaluations affect social behavior in a variety of domains, influencing political, legal, and economic decisions. Whether and to what extent these evaluations influence approach behavior in healthcare contexts warrants research attention. Here we investigate the impact of facial appearance-based evaluations of trustworthiness on healthcare providers' caring inclination, and the moderating role of experience and information about the social identity of the faces. Novice and expert nurses rated their inclination to provide care when viewing photos of trustworthy-, neutral-, and untrustworthy-looking faces. To explore whether information about the target of care influences caring inclination, some participants were told that they would view patients' faces while others received no information about the faces. Both novice and expert nurses had higher caring inclination scores for trustworthy- than for untrustworthy-looking faces; however, experts had higher scores than novices for untrustworthy-looking faces. Regardless of a face's trustworthiness level, experts had higher caring inclination scores for patients than for unidentified individuals, while novices showed no differences. Facial appearance-based inferences can bias caring inclination in healthcare contexts. However, expert healthcare providers are less biased by these inferences and more sensitive to information about the target of care. These findings highlight the importance of promoting novice healthcare professionals' awareness of first impression biases. Copyright © 2017 Elsevier Ltd. All rights reserved.
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions, within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from the user, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed for generating expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently under consideration for industrial evaluation and commercialization by the Quadraxis company.
Dynamic texture recognition using local binary patterns with an application to facial expressions.
Zhao, Guoying; Pietikäinen, Matti
2007-06-01
Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
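The LBP-TOP descriptor can be approximated compactly: compute basic LBP codes on slices taken along the three orthogonal planes of a video volume and concatenate the histograms. The sketch below is a simplified illustration of that idea (basic 8-neighbour LBP, single block, no rotation-invariant or uniform-pattern refinements), not the authors' exact operator.

```python
# Simplified LBP-TOP sketch: basic 8-neighbour LBP histograms over the XY, XT
# and YT planes of a (T, H, W) video volume, concatenated into one descriptor.
import numpy as np

def lbp_2d(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2D array."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neigh >= c).astype(np.int64) << bit
    return codes

def lbp_top(volume, bins=256):
    """Concatenated, normalized LBP histograms over the three orthogonal planes."""
    hists = []
    for axis in range(3):                          # 0: XY planes, 1: XT-like, 2: YT-like
        planes = np.moveaxis(volume, axis, 0)
        codes = np.concatenate([lbp_2d(p).ravel() for p in planes])
        h, _ = np.histogram(codes, bins=bins, range=(0, bins))
        hists.append(h / h.sum())
    return np.concatenate(hists)

video = np.random.rand(20, 48, 48)                 # stand-in for a face image sequence
descriptor = lbp_top(video)
print(descriptor.shape)                            # (768,) = 3 planes x 256 bins
```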
Watkins, Christopher D
2017-11-01
While facial cues to body size are a valid guide to health and attractiveness, it is unclear whether the observer's own condition predicts the salience of (low) size as a cue to female attractiveness. The current study examines whether measures related to women's own attractiveness/appearance predict the extent to which they use facial cues to size to differentiate other women on the attractiveness dimension. Women completed a body mass index (BMI) preference task, where they indicated their preference for high- versus low-BMI versions of the same woman, provided data to calculate their BMI and completed various psychometric measures (self-rated attractiveness/health, dissatisfaction with physical appearance). Here, attractive women and women who were dissatisfied with their own appearance were more likely to associate facial cues to low body size with high attractiveness. These data suggest that psychological factors related to women's appearance shape their evaluations of other women based on cues to size. Such variation in attractiveness judgements may function to reduce the costs of female competition for resources, for example, by identifying "quality" rivals or excluding others based on cues to size.
A voxel-based lesion study on facial emotion recognition after penetrating brain injury
Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan
2013-01-01
The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM mapping results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Beside those common areas, damage to the bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
Spisak, Brian R.; Dekker, Peter H.; Krüger, Max; van Vugt, Mark
2012-01-01
This paper examines the impact of facial cues on leadership emergence. Using evolutionary social psychology, we expand upon implicit and contingent theories of leadership and propose that different types of intergroup relations elicit different implicit cognitive leadership prototypes. It is argued that a biologically based hormonal connection between behavior and corresponding facial characteristics interacts with evolutionarily consistent social dynamics to influence leadership emergence. We predict that masculine-looking leaders are selected during intergroup conflict (war) and feminine-looking leaders during intergroup cooperation (peace). Across two experiments we show that a general categorization of leader versus nonleader is an initial implicit requirement for emergence, and at a context-specific level facial cues of masculinity and femininity contingently affect war versus peace leadership emergence in the predicted direction. In addition, we replicate our findings in Experiment 1 across culture using Western and East Asian samples. In Experiment 2, we also show that masculine-feminine facial cues are better predictors of leadership than male-female cues. Collectively, our results indicate a multi-level classification of context-specific leadership based on visual cues imbedded in the human face and challenge traditional distinctions of male and female leadership. PMID:22276190
NASA Astrophysics Data System (ADS)
Chung, Soyoung; Kim, Joojin; Hong, Helen
2016-03-01
During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images with 3D photographic images presents some difficulties: regions around the eyes and mouth are affected by facial expressions, and registration speed is low because of the dense point clouds on the surfaces. Therefore, we propose a framework for the fusion of facial CBCT images and 3D photos with skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, the skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration with four corresponding landmarks located around the mouth is performed. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weight-based surface registration is performed within a narrow band of the 3D photographic surface.
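The rough alignment step (a similarity transform from a handful of corresponding landmarks) can be illustrated with the standard Umeyama/Kabsch closed-form solution. The landmark coordinates below are invented for illustration; this is a sketch of the general technique, not the authors' code.

```python
# Sketch of landmark-based rough alignment: a similarity (scale + rotation +
# translation) transform estimated from four corresponding points.
import numpy as np

def similarity_from_landmarks(src, dst):
    """Return scale s, rotation R, translation t so that s*R@src_i + t ~ dst_i."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical landmark sets (e.g., four points around the mouth), in mm.
cbct_pts = np.array([[0.0, 0, 0], [30, 0, 0], [15, 10, 0], [15, -10, 5]])
rot_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
photo_pts = 0.9 * cbct_pts @ rot_true.T + [5, 2, -3]

s, R, t = similarity_from_landmarks(cbct_pts, photo_pts)
aligned = (s * (R @ cbct_pts.T)).T + t
print("residual RMS (mm):", np.sqrt(((aligned - photo_pts) ** 2).sum(axis=1).mean()))
```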
Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone
2014-12-01
Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both the OXT and placebo conditions, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT effects on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.
Emotional facial recognition in proactive and reactive violent offenders.
Philipp-Wiegmann, Florence; Rösler, Michael; Retz-Junginger, Petra; Retz, Wolfgang
2017-10-01
The purpose of this study is to analyse individual differences in the ability of emotional facial recognition in violent offenders, who were characterised as either reactive or proactive in relation to their offending. In accordance with findings of our previous study, we expected higher impairments in facial recognition in reactive than in proactive violent offenders. To assess the ability to recognize facial expressions, the computer-based Facial Emotional Expression Labeling Test (FEEL) was performed. Group allocation of reactive and proactive violent offenders and assessment of psychopathic traits were performed by an independent forensic expert using rating scales (PROREA, PCL-SV). Compared to proactive violent offenders and controls, the performance of emotion recognition in the reactive offender group was significantly lower, both in total and especially in recognition of negative emotions such as anxiety (d = -1.29), sadness (d = -1.54), and disgust (d = -1.11). Furthermore, reactive violent offenders showed a tendency to interpret non-anger emotions as anger. In contrast, proactive violent offenders performed as well as controls. General and specific deficits in reactive violent offenders are in line with the results of our previous study and correspond to predictions of the Integrated Emotion System (IES, 7) and hostile attribution processes (21). Due to the different error pattern in the FEEL test, the theoretical distinction between proactive and reactive aggression can be supported based on emotion recognition, even though aggression itself is always a heterogeneous act rather than a distinct one-dimensional concept.
Cutaneous Sensibility Changes in Bell's Palsy Patients.
Cárdenas Palacio, Carlos Andrés; Múnera Galarza, Francisco Alejandro
2017-05-01
Objective Bell's palsy is a cranial nerve VII dysfunction that renders the patient unable to control facial muscles on the affected side. Nevertheless, some patients have reported cutaneous changes in the paretic area. Therefore, cutaneous sensibility changes might be possible additional symptoms within the clinical presentation of this disorder. Accordingly, the aim of this research was to investigate the relationship between cutaneous sensibility and facial paralysis severity in these patients. Study Design Prospective longitudinal cohort study. Settings Tertiary care medical center. Subjects and Methods Twelve acute-onset Bell's palsy patients were enrolled from March to September 2009. In addition, 12 sex- and age-matched healthy volunteers were tested. Cutaneous sensibility was evaluated with pressure threshold and 2-point discrimination at 6 areas of the face. Facial paralysis severity was evaluated with the House-Brackmann scale. Results Statistically significant correlations based on Spearman's test were found between facial paralysis severity and cutaneous sensitivity on the forehead, eyelid, cheek, nose, and lip (P < .05). Additionally, significant differences based on Student's t test were observed between the two sides of the face in 2-point discrimination on the eyelid, cheek, and lip (P < .05) in Bell's palsy patients but not in healthy subjects. Conclusion Such results suggest a possible relationship between the loss of motor control of the face and changes in facial sensory information processing. These findings warrant further research on the neurophysiologic changes associated with the cutaneous sensibility disturbances of these patients.
A new atlas for the evaluation of facial features: advantages, limits, and applicability.
Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina
2011-03-01
Methods for the verification of the identity of offenders in cases involving video-surveillance images in criminal investigation events are currently under scrutiny by several forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to the optimization of methods for facial identification of persons on photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new model of a morphologic atlas, called DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is the first attempt at verifying the feasibility of this atlas as a preliminary step to personal identification by exploring the intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity only from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement of consistency in the evaluation.
Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan
2014-08-01
Wind chill equivalent temperatures (WCETs) were estimated with a modified version of Fiala's whole-body thermoregulation model of a clothed person. Facial convective heat exchange coefficients, applied in the computations concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by the following equation: [Formula: see text]. Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of risk of frostbite. Predicted weather combinations for probabilities of "Practically no risk of frostbite for most people," defined as less than 5% risk at wind speeds above 40 km h(-1), were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures below -35 °C, the presently calculated weather combination of 40 km h(-1)/-35 °C, at which the risk transitions to incurring frostbite in less than 2 min, is less conservative than the published combination of 60 km h(-1)/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and frostbite risk probabilities over those currently practiced.
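For context, the WCET formula behind the charts currently used by the North American weather services (the baseline this study compares against, not the modified model it derives) can be evaluated directly; the snippet below implements that standard chart formula.

```python
# Standard North American wind chill chart formula (Environment Canada / NWS),
# for comparison only; this is NOT the modified equation derived in the study.
# T_air in deg C, wind speed in km/h; valid roughly for T_air <= 10 C, v >= 4.8 km/h.
def wcet_chart(t_air_c: float, wind_kmh: float) -> float:
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * t_air_c - 11.37 * v + 0.3965 * t_air_c * v

print(round(wcet_chart(-35.0, 40.0), 1))   # about -54 C under the current chart formula
```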
Gómez-Valdés, Jorge; Hünemeier, Tábita; Quinto-Sánchez, Mirsha; Paschetta, Carolina; de Azevedo, Soledad; González, Marina F.; Martínez-Abadías, Neus; Esparza, Mireia; Pucciarelli, Héctor M.; Salzano, Francisco M.; Bau, Claiton H. D.; Bortolini, Maria Cátira; González-José, Rolando
2013-01-01
Antisocial and criminal behaviors are multifactorial traits whose interpretation relies on multiple disciplines. Since these interpretations may have social, moral and legal implications, a constant review of the evidence is necessary before any scientific claim is considered as truth. A recent study proposed that men with wider faces relative to facial height (fWHR) are more likely to develop unethical behaviour mediated by a psychological sense of power. This research was based on reports suggesting that sexual dimorphism and selection would be responsible for a correlation between fWHR and aggression. Here we show that 4,960 individuals from 94 modern human populations belonging to a vast array of genetic and cultural contexts do not display significant amounts of fWHR sexual dimorphism. Further analyses using populations with associated ethnographical records, as well as samples of male prisoners of the Mexico City Federal Penitentiary condemned for crimes of variable levels of inter-personal aggression (homicide, robbery, and minor faults), did not provide significant evidence that populations/individuals with higher levels of bellicosity, aggressive behaviour, or power-mediated behaviour display greater fWHR. Finally, a regression analysis of fWHR on individuals' fitness showed no significant correlation between this facial trait and reproductive success. Overall, our results suggest that facial attributes are poor predictors of aggressive behaviour, or at least, that sexual selection was too weak to leave a signal on patterns of between- and within-sex and population facial variation. PMID:23326328
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.
Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J
2018-06-12
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
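The fusion strategy described above (averaging rating-based identity judgments across examiners) is simple to reproduce in outline. The sketch below uses synthetic ratings, not data from the study, and a rank-based AUC as the accuracy measure, which is one reasonable choice rather than the paper's exact metric.

```python
# Illustrative rating-fusion sketch: average several examiners' identity
# ratings and compare discrimination (AUC) of fused vs. individual judgments.
import numpy as np

rng = np.random.default_rng(1)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = rng.integers(0, 2, size=200)                          # 1 = same-identity pair
true_signal = labels + rng.normal(0, 1.0, size=200)
examiners = true_signal + rng.normal(0, 1.0, size=(4, 200))    # 4 noisy raters

individual_aucs = [auc(e, labels) for e in examiners]
fused_auc = auc(examiners.mean(axis=0), labels)                # averaged ratings
print("mean individual AUC:", round(float(np.mean(individual_aucs)), 3))
print("fused AUC:", round(float(fused_auc), 3))
```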
The role of great auricular-facial nerve neurorrhaphy in facial nerve damage
Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo
2015-01-01
Background: The facial nerve is easily damaged, and there are many reconstructive options, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, great auricular-facial nerve neurorrhaphy has received little study. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. Results: On apex nasi amesiality observation, it was found that apex nasi amesiality of the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. Conclusions: The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh. PMID:26550216
Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.
von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg
2014-01-01
Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
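The two fusion strategies can be contrasted with a small sketch. The features, weights and classifier below are illustrative assumptions (synthetic data, LDA as the per-modality classifier); the paper's condensation-based classifier and real features are not reproduced here.

```python
# Feature-level fusion (weighted concatenation + LDA) vs. decision-level
# fusion (weighted combination of per-modality posteriors), on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, n_classes = 600, 12
y = rng.integers(0, n_classes, size=n)
face_feat = rng.normal(size=(n, 40)) + y[:, None] * 0.05     # facial-expression features
hand_feat = rng.normal(size=(n, 60)) + y[:, None] * 0.08     # hand-motion features
train, test = slice(0, 450), slice(450, n)

# Feature-level (early) fusion: weight and concatenate groups, then classify.
w_face, w_hand = 0.4, 0.6
X_early = np.hstack([w_face * face_feat, w_hand * hand_feat])
early = LinearDiscriminantAnalysis().fit(X_early[train], y[train])
acc_early = early.score(X_early[test], y[test])

# Decision-level (late) fusion: per-modality classifiers, weighted posteriors.
clf_face = LinearDiscriminantAnalysis().fit(face_feat[train], y[train])
clf_hand = LinearDiscriminantAnalysis().fit(hand_feat[train], y[train])
post = w_face * clf_face.predict_proba(face_feat[test]) + \
       w_hand * clf_hand.predict_proba(hand_feat[test])
acc_late = (post.argmax(axis=1) == y[test]).mean()

print(f"feature-level fusion accuracy: {acc_early:.2f}")
print(f"decision-level fusion accuracy: {acc_late:.2f}")
```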
New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note.
Gerard, Ian J; Hall, Jeffery A; Mok, Kelvin; Collins, D Louis
2015-09-01
Newer versions of the commercial Medtronic StealthStation allow the use of only 8 landmark pairs for patient-to-image registration as opposed to 9 landmarks in older systems. The choice of which landmark pair to drop in these newer systems can have an effect on the quality of the patient-to-image registration. To investigate 4 landmark registration protocols based on 8 landmark pairs and compare the resulting registration accuracy with a 9-landmark protocol. Four different protocols were tested on both phantoms and patients. Two of the protocols involved using 4 ear landmarks and 4 facial landmarks and the other 2 involved using 3 ear landmarks and 5 facial landmarks. Both the fiducial registration error and target registration error were evaluated for each of the different protocols to determine any difference between them and the 9-landmark protocol. No difference in fiducial registration error was found between any of the 8-landmark protocols and the 9-landmark protocol. A significant decrease (P < .05) in target registration error was found when using a protocol based on 4 ear landmarks and 4 facial landmarks compared with the other protocols based on 3 ear landmarks. When using 8 landmarks to perform the patient-to-image registration, the protocol using 4 ear landmarks and 4 facial landmarks greatly outperformed the other 8-landmark protocols and 9-landmark protocol, resulting in the lowest target registration error.
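The two accuracy metrics compared in this study, fiducial registration error (FRE, at the landmarks used for registration) and target registration error (TRE, at a point not used for registration), can be computed as below. The landmark coordinates and target are hypothetical, and the least-squares rigid fit is the standard closed-form solution, not the StealthStation's internal algorithm.

```python
# FRE vs. TRE after a rigid landmark-based patient-to-image registration.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((dst - mu_d).T @ (src - mu_s))
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = U @ D @ Vt
    return R, mu_d - R @ mu_s

def rms_error(R, t, pts_patient, pts_image):
    moved = pts_patient @ R.T + t
    return np.sqrt(((moved - pts_image) ** 2).sum(axis=1).mean())

# Hypothetical 8 landmark pairs (e.g., 4 ear + 4 facial) and one surgical target.
patient_lms = np.random.default_rng(0).uniform(-80, 80, size=(8, 3))       # mm
image_lms = patient_lms + np.random.default_rng(1).normal(0, 1.0, size=(8, 3))
patient_target = np.array([[0.0, 40.0, 20.0]])
image_target = patient_target + np.array([[0.5, -0.3, 0.8]])

R, t = rigid_fit(patient_lms, image_lms)
print("FRE (mm):", round(rms_error(R, t, patient_lms, image_lms), 2))
print("TRE (mm):", round(rms_error(R, t, patient_target, image_target), 2))
```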
Yankouskaya, Alla; Booth, David A; Humphreys, Glyn
2012-11-01
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P
2016-01-01
Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
Re, Daniel E; Rule, Nicholas O
2016-10-01
Recent research has demonstrated that judgments of Chief Executive Officers' (CEOs') faces predict their firms' financial performance, finding that characteristics associated with higher power (e.g., dominance) predict greater profits. Most of these studies have focused on CEOs of profit-based businesses, where the main criterion for success is financial gain. Here, we examined whether facial appearance might predict measures of success in a sample of CEOs of non-profit organizations (NPOs). Indeed, contrary to findings for the CEOs of profit-based businesses, judgments of leadership and power from the faces of CEOs of NPOs negatively correlated with multiple measures of charitable success (Study 1). Moreover, CEOs of NPOs looked less powerful than the CEOs of profit-based businesses (Study 2) and leadership ratings positively associated with warmth-based traits and NPO success when participants knew the faces belonged to CEOs of NPOs (Study 3). CEOs who look less dominant may therefore achieve greater success in leading NPOs, opposite the relationship found for the CEOs of profit-based companies. Thus, the relationship between facial appearance and leadership success varies by organizational context. © The Author(s) 2016.
Gender identity rather than sexual orientation impacts on facial preferences.
Ciocca, Giacomo; Limoncin, Erika; Cellerino, Alessandro; Fisher, Alessandra D; Gravina, Giovanni Luca; Carosa, Eleonora; Mollaioli, Daniele; Valenzano, Dario R; Mennucci, Andrea; Bandini, Elisa; Di Stasi, Savino M; Maggi, Mario; Lenzi, Andrea; Jannini, Emmanuele A
2014-10-01
Differences in facial preferences between heterosexual men and women are well documented. It is still a matter of debate, however, how variations in sexual identity/sexual orientation may modify facial preferences. This study aims to investigate the facial preferences of male-to-female (MtF) individuals with gender dysphoria (GD) and the influence of short-term/long-term relationships on facial preference, in comparison with healthy subjects. Eighteen untreated MtF subjects, 30 heterosexual males, 64 heterosexual females, and 42 homosexual males, recruited from university students/staff, at gay events, and in Gender Clinics, were shown a composite male or female face. The sexual dimorphism of these pictures was stressed or reduced in a continuous fashion, with a sequence of 21 pictures of the same face warped from a feminized to a masculinized shape, using an open-source morphing program (gtkmorph) based on the X-Morph algorithm. MtF GD subjects and heterosexual females showed the same pattern of preferences: a clear preference for less dimorphic (more feminized) faces for both short- and long-term relationships. Conversely, both heterosexual and homosexual men selected significantly more dimorphic faces, showing a preference for hyperfeminized and hypermasculinized faces, respectively. These data show that the facial preferences of MtF GD individuals mirror those of the sex congruent with their gender identity. Conversely, heterosexual males trace the facial preferences of homosexual men, indicating that changes in sexual orientation do not substantially affect preference for the most attractive faces. © 2014 International Society for Sexual Medicine.
Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.
Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W
2017-09-01
Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes were measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.
Contextual influences on pain communication in couples with and without a partner with chronic pain.
Gagnon, Michelle M; Hadjistavropoulos, Thomas; MacNab, Ying C
2017-10-01
This is an experimental study of pain communication in couples. Despite evidence that chronic pain in one partner impacts both members of the dyad, dyadic influences on pain communication have not been sufficiently examined and are typically studied based on retrospective reports. Our goal was to directly study contextual influences (ie, presence of chronic pain, gender, relationship quality, and pain catastrophizing) on self-reported and nonverbal (ie, facial expressions) pain responses. Couples with (n = 66) and without (n = 65) an individual with chronic pain (ICP) completed relationship and pain catastrophizing questionnaires. Subsequently, one partner underwent a pain task (pain target, PT), while the other partner observed (pain observer, PO). In couples with an ICP, the ICP was assigned to be the PT. Pain intensity and PO perceived pain intensity ratings were recorded at multiple intervals. Facial expressions were video recorded throughout the pain task. Pain-related facial expression was quantified using the Facial Action Coding System. The most consistent predictor of either partner's pain-related facial expression was the pain-related facial expression of the other partner. Pain targets provided higher pain ratings than POs and female PTs reported and showed more pain, regardless of chronic pain status. Gender and the interaction between gender and relationship satisfaction were predictors of pain-related facial expression among PTs, but not POs. None of the examined variables predicted self-reported pain. Results suggest that contextual variables influence pain communication in couples, with distinct influences for PTs and POs. Moreover, self-report and nonverbal responses are not displayed in a parallel manner.
Three-dimensional facial analyses of Indian and Malaysian women.
Kusugal, Preethi; Ruttonji, Zarir; Gowda, Roopa; Rajpurohit, Ladusingh; Lad, Pritam; Ritu
2015-01-01
Facial measurements serve as a valuable tool in the treatment planning of maxillofacial rehabilitation, orthodontic treatment, and orthognathic surgeries. The esthetic guidelines of the face are still based on neoclassical canons, which were used in ancient art. These canons are considered to be highly subjective, and there is ample evidence in the literature that raises the question of whether these canons can be applied to modern populations. This study was carried out to analyze the facial features of Indian and Malaysian women by using a three-dimensional (3D) scanner and thus determine the prevalence of neoclassical facial esthetic canons in both groups. The study was carried out on 60 women in the age range of 18-25 years, of whom 30 were Indian and 30 Malaysian. As many as 16 facial measurements were taken using a noncontact 3D scanner. The unpaired t-test was used for comparison of facial measurements between Indian and Malaysian females. The two-tailed Fisher exact test was used to determine the prevalence of the neoclassical canons. The orbital canon was prevalent in 80% of Malaysian women; the same was found in only 16% of Indian women (P = 0.00013). About 43% of Malaysian women exhibited the orbitonasal canon (P = 0.0470), whereas the nasoaural canon was prevalent in 73% of Malaysian and 33% of Indian women (P = 0.0068). The orbital, orbitonasal, and nasoaural canons were more prevalent in Malaysian women. The facial profile, nasooral, and nasofacial canons were not seen in either group. Though some canons provide guidelines for esthetic analyses of the face, complete reliance on these canons is not justifiable.
Bell's palsy: a summary of current evidence and referral algorithm.
Glass, Graeme E; Tzafetta, Kallirroi
2014-12-01
Spontaneous idiopathic facial nerve (Bell's) palsy leaves residual hemifacial weakness in 29% which is severe and disfiguring in over half of these cases. Acute medical management remains the best way to improve outcomes. Reconstructive surgery can improve long term disfigurement. However, acute and surgical options are time-dependent. As family practitioners see, on average, one case every 2 years, a summary of this condition based on common clinical questions may improve acute management and guide referral for those who need specialist input. We formulated a series of clinical questions likely to be of use to family practitioners on encountering this condition and sought evidence from the literature to answer them. The lifetime risk is 1 in 60, and is more common in pregnancy and diabetes mellitus. Patients often present with facial pain or paraesthesia, altered taste and intolerance to loud noise in addition to facial droop. It is probably caused by ischaemic compression of the facial nerve within the meatal segment of the facial canal probably as a result of viral inflammation. When given early, high dose corticosteroids can improve outcomes. Neither antiviral therapy nor other adjuvant therapies are supported by evidence. As the facial muscles remain viable re-innervation targets for up to 2 years, late referrals require more complex reconstructions. Early recognition, steroid therapy and early referral for facial reanimation (when the diagnosis is secure) are important features of good management when encountering these complex cases. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Neural mechanism for judging the appropriateness of facial affect.
Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Ki, Seon Wan; Im, Dong-Mi; Lee, Soo Jung; Lee, Hong Shick
2005-12-01
Questions regarding the appropriateness of facial expressions in particular situations arise ubiquitously in everyday social interactions. To determine the appropriateness of facial affect, first of all, we should represent our own or the other's emotional state as induced by the social situation. Then, based on these representations, we should infer the possible affective response of the other person. In this study, we identified the brain mechanism mediating special types of social evaluative judgments of facial affect in which the internal reference is related to theory of mind (ToM) processing. Many previous ToM studies have used non-emotional stimuli, but, because so much valuable social information is conveyed through nonverbal emotional channels, this investigation used emotionally salient visual materials to tap ToM. Fourteen right-handed healthy subjects volunteered for our study. We used functional magnetic resonance imaging to examine brain activation during the judgmental task for the appropriateness of facial affects as opposed to gender matching tasks. We identified activation of a brain network, which includes both medial frontal cortex, left temporal pole, left inferior frontal gyrus, and left thalamus during the judgmental task for appropriateness of facial affect compared to the gender matching task. The results of this study suggest that the brain system involved in ToM plays a key role in judging the appropriateness of facial affect in an emotionally laden situation. In addition, our result supports that common neural substrates are involved in performing diverse kinds of ToM tasks irrespective of perceptual modalities and the emotional salience of test materials.
Estimation of 2D to 3D dimensions and proportionality indices for facial examination.
Martos, Rubén; Valsecchi, Andrea; Ibáñez, Oscar; Alemán, Inmaculada
2018-06-01
Photo-anthropometry is a metric-based facial image comparison technique where measurements of the face are taken from an image using predetermined facial landmarks. In particular, dimensions and proportionality indices (DPIs) are compared to DPIs from another facial image. Different studies concluded that photo-anthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. The major limitation is the need for images acquired under very restrictive, controlled conditions. To overcome this latter issue, we propose a novel methodology to estimate 3D DPIs from 2D ones. It uses computer graphic techniques to simulate thousands of facial photographs under known camera conditions and regression to derive the mathematical relationship between 2D and 3D DPIs automatically. Additionally, we present a methodology that makes use of the estimated 3D DPIs for reducing the number of potential matches of a given unknown facial photograph within a set of known candidates. The error in the estimation of the 3D DPIs can be as large as 35%, but both the first and third quartiles are consistently inside the ±5% range. The methodology for filtering cases has been demonstrated to be useful in the task of narrowing down the list of possible candidates for a given photograph. It is able to remove on average (validated using a cross-validation technique) 57% and 24% of the negative cases, depending on the number of DPIs available. Limitations of the work, together with open research lines, are included in the Discussion section. Copyright © 2018 Elsevier B.V. All rights reserved.
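The simulate-then-regress idea can be sketched with a crude pinhole model: generate many synthetic "photographs" under known camera distances, measure a 2D proportionality index, and learn a regression back to the 3D index. The geometry, noise model and regressor below are simplified assumptions, not the authors' pipeline.

```python
# Sketch of 2D-to-3D DPI estimation via simulation + regression (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000

# True 3D facial dimensions (e.g., nose length vs. face height), arbitrary units.
nose_3d = rng.normal(50, 4, n)
face_3d = rng.normal(120, 8, n)
dpi_3d = nose_3d / face_3d                          # 3D proportionality index

# Crude pinhole projection: features at slightly different depths, so the 2D
# index depends on camera distance (perspective distortion) plus pixel noise.
cam_dist = rng.uniform(500, 3000, n)                # mm, known acquisition parameter
depth_offset = rng.normal(30, 5, n)                 # nose tip closer to the camera
nose_2d = nose_3d / (cam_dist - depth_offset)
face_2d = face_3d / cam_dist
dpi_2d = nose_2d / face_2d + rng.normal(0, 0.002, n)

X = np.column_stack([dpi_2d, cam_dist, dpi_2d * cam_dist])
train, test = slice(0, 4000), slice(4000, n)
reg = LinearRegression().fit(X[train], dpi_3d[train])
pred = reg.predict(X[test])
rel_err = np.abs(pred - dpi_3d[test]) / dpi_3d[test] * 100
print("median |relative error| (%):", round(float(np.median(rel_err)), 2))
```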
Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C
2017-12-19
Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second or a cluster of individuals with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identifies a top match, at least one correct match in the top five and at least one in the top 10 more than expected by chance (P < 0.00001). There was low agreement between the technology and clinicians, with higher accuracy of the technology when results were discordant (P < 0.01) for all syndromes except Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing the known developmental disorder genes.
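The leave-one-out, top-k evaluation described above can be outlined compactly: each probe image is matched against all remaining images by descriptor similarity, and the question is whether an image from the same syndrome subgroup appears among the top 1, 5 or 10 matches. The embeddings below are random stand-ins, not outputs of the actual face-matching technology.

```python
# Leave-one-out top-k hit-rate evaluation for syndrome face matching (sketch).
import numpy as np

rng = np.random.default_rng(0)
n_images, dim = 500, 64
groups = rng.integers(0, 11, size=n_images)          # 10 syndrome subgroups + controls (label 10)
embeddings = rng.normal(size=(n_images, dim)) + groups[:, None] * 0.3
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def hit_rates(emb, labels, ks=(1, 5, 10)):
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)                   # leave the probe itself out
    hits = {k: 0 for k in ks}
    probes = np.flatnonzero(labels != 10)             # evaluate syndrome images only
    for i in probes:
        ranked = np.argsort(-sims[i])
        for k in ks:
            if np.any(labels[ranked[:k]] == labels[i]):
                hits[k] += 1
    return {k: hits[k] / len(probes) for k in ks}

print(hit_rates(embeddings, groups))
```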
Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold
2015-09-01
In this study, we propose a hybrid method of face recognition that uses information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, a CCD (Charge Coupled Device) camera acquires the image, the face is detected using AdaBoost, and Histogram Equalization (HE) is then employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce the dimensionality of the face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase, completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials: constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including the learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition, and their performance is quantified in terms of output performance and recognition rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
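The spirit of the pipeline (dimensionality reduction followed by an RBF network whose centres come from clustering) can be sketched as below. Note the substitutions: K-means stands in for Fuzzy C-Means, linear least-squares weights stand in for the polynomial consequents and Differential Evolution, ASM/AdaBoost preprocessing is omitted, and the data are synthetic.

```python
# PCA + RBF-network face recognition sketch (simplified stand-in for P-RBF NNs).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_people, per_person, dim = 10, 20, 32 * 32
y = np.repeat(np.arange(n_people), per_person)
X = rng.normal(size=(len(y), dim)) + np.repeat(rng.normal(size=(n_people, dim)), per_person, axis=0)

train = rng.permutation(len(y))[:150]
test = np.setdiff1d(np.arange(len(y)), train)
pca = PCA(n_components=30).fit(X[train])            # dimensionality reduction
X_pca = pca.transform(X)

centers = KMeans(n_clusters=25, n_init=5, random_state=0).fit(X_pca[train]).cluster_centers_
sigma = np.median(np.linalg.norm(X_pca[train][:, None] - centers[None], axis=2))

def rbf_features(Z):
    d2 = ((Z[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))            # Gaussian RBF activations

targets = np.eye(n_people)[y]                        # one-hot class targets
W, *_ = np.linalg.lstsq(rbf_features(X_pca[train]), targets[train], rcond=None)
pred = rbf_features(X_pca[test]) @ W
print("recognition rate:", round(float((pred.argmax(1) == y[test]).mean()), 2))
```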
Clinical outcomes of facial transplantation: a review.
Shanmugarajah, Kumaran; Hettiaratchy, Shehan; Clarke, Alex; Butler, Peter E M
2011-01-01
A total of 18 composite tissue allotransplants of the face have been reported to date. Prior to the start of the face transplant programme, there had been intense debate over the risks and benefits of performing this experimental surgery. This review examines the surgical, functional and aesthetic, immunological and psychological outcomes of facial transplantation thus far, based on the predicted risks outlined in early publications from teams around the world. The initial experience has demonstrated that facial transplantation is surgically feasible. Functional and aesthetic outcomes have been very encouraging, with good motor and sensory recovery and improvements to important facial functions observed. Episodes of acute rejection have been common, as predicted, but easily controlled with increases in systemic immunosuppression. Psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks and even the workplace. Complications of immunosuppression and patient mortality have been observed in the initial series. These have highlighted rigorous patient selection as the key predictor of success. The overall early outcomes of the face transplant programme have been generally more positive than many predicted. This initial success is testament to the robust approach of the teams involved. Dissemination of outcomes and ongoing refinement of the process may allow facial transplantation to eventually become a first-line reconstructive option for those with extensive facial disfigurements. Copyright © 2011 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Divine proportions in attractive and nonattractive faces.
Pancherz, Hans; Knapp, Verena; Erbe, Christina; Heiss, Anja Melina
2010-01-01
To test Ricketts' 1982 hypothesis that facial beauty is measurable by comparing attractive and nonattractive faces of females and males with respect to the presence of the divine proportions. The analysis of frontal-view facial photos of 90 cover models (50 females, 40 males) from famous fashion magazines and of 34 attractive (29 females, five males) and 34 nonattractive (13 females, 21 males) persons selected from a group of former orthodontic patients was carried out in this study. Based on Ricketts' method, five transverse and seven vertical facial reference distances were measured and compared with the corresponding calculated divine distances expressed in phi-relationships (φ = 1.618). Furthermore, transverse and vertical facial disproportion indices were created. For both the models and the patients, all the reference distances deviated considerably from the respective divine values. The average deviations ranged from 0.3% to 7.8% in the female groups of models and attractive patients, with no difference between them. In the male groups of models and attractive patients, the average deviations ranged from 0.2% to 11.2%. When comparing attractive and nonattractive female, as well as male, patients, deviations from the divine values for all variables were larger in the nonattractive sample. Attractive individuals have facial proportions closer to the divine values than nonattractive ones. In accordance with the hypothesis of Ricketts, facial beauty is measurable to some degree. COPYRIGHT © 2009 BY QUINTESSENCE PUBLISHING CO, INC.
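As a worked illustration of how a percent deviation from a "divine" value can be computed (Ricketts' exact reference distances and indices are only summarized above), a hypothetical pair of facial distances expected to stand in the golden ratio could be compared like this:

    # Illustrative sketch, not Ricketts' published protocol: the "divine" value implied
    # by the shorter distance is the shorter distance times phi, and the deviation is
    # the measured longer distance's percent departure from it. Example values invented.
    PHI = 1.618

    def percent_deviation(shorter_mm, longer_mm):
        divine = shorter_mm * PHI                      # expected "divine" distance
        return abs(longer_mm - divine) / divine * 100

    # e.g. a hypothetical transverse pair measuring 32 mm and 50 mm
    print(round(percent_deviation(32.0, 50.0), 1))     # -> 3.4 (percent deviation)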
ERIC Educational Resources Information Center
Rosen, Tamara E.; Lerner, Matthew D.
2016-01-01
Facial emotion recognition (FER) is thought to be a key deficit domain in autism spectrum disorder (ASD). However, the extant literature is based solely on cross-sectional studies; thus, little is known about even short-term intra-individual dynamics of FER in ASD over time. The present study sought to examine trajectories of FER in ASD youth over…
Affect in Human-Robot Interaction
2014-01-01
…is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978)… tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of… …robot's behavior through decision-making, learning, or action selection; a number of researchers used the fuzzy logic approach to emotion generation…
Penile herpes zoster: an unusual location for a common disease.
Bjekic, Milan; Markovic, Milica; Sipetic, Sandra
2011-01-01
Herpes zoster is a common dermatological condition which affects up to 20% of the population, most frequently involving the thoracic and facial dermatomes, with sacral lesions occurring rarely and only a few reported cases of penile shingles. We report two cases of unusual penile clinical presentations of varicella zoster virus infection in immunocompetent men. The patients presented with grouped clusters of vesicles and erythema on the left side of the penile shaft and the posterior aspect of the left thigh and buttock, involving the S2-S4 dermatomes. The lesions resolved quickly upon administration of oral antiviral therapy. Penile herpes zoster should not be overlooked in patients with unilateral vesicular rash.
Mok, Gary Tsz Kin; Chung, Brian Hon-Yin
2017-01-01
Background: 22q11.2 deletion syndrome (22q11.2DS) is a common genetic disorder with an estimated frequency of 1/4,000. It is a multi-systemic disorder with high phenotypic variability. Our previous work showed substantial under-diagnosis of 22q11.2DS, as 1 in 10 adult patients with conotruncal defects were found to have 22q11.2DS. The National Institutes of Health (NIH) has created an atlas of human malformation syndromes from diverse populations to provide an easy tool to assist clinicians in diagnosing these syndromes across various populations. In this study, we sought to determine whether training computer-aided facial recognition technology using images of ethnicity-matched patients from the NIH Atlas can improve its detection performance. Methods: Clinical photographs of 16 Chinese subjects with molecularly confirmed 22q11.2DS, taken from the NIH atlas and its related publication, were used for training the facial recognition technology. The system automatically localizes hundreds of facial fiducial points and takes measurements. The final classification is based on these measurements, as well as an estimated probability of subjects having 22q11.2DS based on the entire facial image. Clinical photographs of 7 patients with molecularly confirmed 22q11.2DS were obtained with informed consent and used to test the performance in recognizing the facial profiles of Chinese subjects before and after training. Results: All 7 test cases improved in ranking and scoring after the software training. In 4 cases, 22q11.2DS did not appear as a possible syndrome match before the training; however, it appeared within the first 10 syndrome matches after training. Conclusions: These pilot data show that this technology can be trained to recognize patients with 22q11.2DS. They also highlight the need to collect clinical photographs of patients from diverse populations as resources for training the software, which can improve the performance of computer-aided facial recognition technology.
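The scoring internals of the face-recognition tool are proprietary, but the fusion-and-ranking idea described above can be sketched as follows; the weighting scheme, syndrome count, and the effect attributed to training are all invented for illustration.

    # Hypothetical sketch: each candidate syndrome gets a score combining a
    # landmark-measurement distance and a whole-image probability, and we check the
    # rank of the true syndrome before and after a (simulated) training effect.
    import numpy as np

    def rank_of_syndrome(measure_dist, image_prob, true_idx, alpha=0.5):
        """Lower measurement distance and higher image probability both favour a match;
        combine them into one score and return the 1-based rank of the true syndrome."""
        score = alpha * (1 - measure_dist / measure_dist.max()) + (1 - alpha) * image_prob
        order = np.argsort(-score)                 # best-scoring syndrome first
        return int(np.where(order == true_idx)[0][0]) + 1

    rng = np.random.default_rng(2)
    n_syndromes = 50
    measure_dist = rng.uniform(0.2, 1.0, n_syndromes)
    image_prob = rng.uniform(0.0, 0.4, n_syndromes)

    true_idx = 7
    rank_before = rank_of_syndrome(measure_dist, image_prob, true_idx)
    # Training on ethnicity-matched images is assumed to sharpen the true syndrome's scores:
    measure_dist[true_idx] *= 0.4
    image_prob[true_idx] = 0.9
    rank_after = rank_of_syndrome(measure_dist, image_prob, true_idx)
    print(rank_before, rank_after, rank_after <= 10)   # did it enter the top 10?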
Management of the Facial Nerve in Lateral Skull Base Surgery: Analytic Retrospective Study
El Shazly, Mohamed A.; Mokbel, Mahmoud A.M.; Elbadry, Amr A.; Badran, Hatem S.
2011-01-01
Background: Surgical approaches to the jugular foramen are often complex and lengthy procedures associated with significant morbidity based on the anatomic and tumor characteristics. In addition to the risk of intra-operative hemorrhage from vascular tumors, lower cranial nerve deficits are frequently increased after intra-operative manipulation. Accordingly, modifications in the surgical techniques have been developed to minimize these risks. Preoperative embolization and intra-operative ligation of the external carotid artery have decreased the intraoperative blood loss. Accurate identification and exposure of the cranial nerves extracranially allows for their preservation during tumor resection. The modification of facial nerve mobilization provides widened infratemporal exposure with less postoperative facial weakness. The ideal approach should enable complete, one-stage tumor resection with excellent infratemporal and posterior fossa exposure and would not aggravate or cause neurologic deficit. The aim of this study is to present our experience in handling jugular foramen lesions (mainly glomus jugulare) without the need for anterior facial nerve transposition. Methods: In this series we present our experience in Kasr ElEini University hospital (Cairo, Egypt) in handling 36 patients with jugular foramen lesions over a period of 20 years, during which the previously mentioned preoperative and operative rules were followed. The clinical status, operative technique, postoperative care and outcome are detailed and analyzed in relation to the outcome. Results: Complete cure without complications was achieved in four cases of congenital cholesteatoma and four cases with class B glomus. In advanced cases of glomus jugulare (28 patients, C and D stages), complete cure was achieved in 21 of them (75%). The operative complications were also confined to this group of 28 patients, in the form of facial paralysis in 20 patients (55.6% of the series) and symptomatic vagal paralysis in 18 patients (50% of the series). Conclusions: Total anterior rerouting of the facial nerve carries a high risk of facial paralysis, so it should be reserved for cases where the lesion extends beyond the vertical ICA. Otherwise, for less extensive lesions and less aggressive pathologies, less aggressive approaches can be adopted with fewer hazards. PMID:24179402
Facial Scar Revision: Understanding Facial Scar Treatment
Facial Expression Presentation for Real-Time Internet Communication
NASA Astrophysics Data System (ADS)
Dugarry, Alexandre; Berrada, Aida; Fu, Shan
2003-01-01
Text, voice and video images are the most common forms of media content for instant communication on the Internet. Studies have shown that facial expressions convey much richer information than text and voice during a face-to-face conversation. The currently available real-time means of communication (instant text messages, chat programs and videoconferencing), however, have major drawbacks in terms of exchanging facial expression. The first two means do not involve image transmission, whilst videoconferencing requires a large bandwidth that is not always available, and the transmitted image sequence is neither smooth nor free of delay. The objective of the work presented here is to develop a technique that overcomes these limitations by extracting the facial expressions of speakers to enable real-time communication. In order to obtain the facial expressions, the main characteristics of the image are emphasized. Interpolation is performed on previously detected edge points to create geometric shapes such as arcs and lines. The regional dominant colours of the pictures are also extracted, and the combined results are subsequently converted into Scalable Vector Graphics (SVG) format. The application based on the proposed technique is intended to be used alongside chat programs and to run on any platform.
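A rough sketch of these extraction steps, using the Pillow imaging library: emphasize edges, pick a dominant colour, and emit a small SVG. The arc/line fitting from edge points is replaced here by a placeholder path, and the toy image stands in for a real video frame of the speaker's face.

    # Stand-in illustration of edge emphasis, dominant-colour extraction and SVG output;
    # not the authors' interpolation-based shape fitting.
    from PIL import Image, ImageFilter, ImageDraw

    # Toy input image (in practice, a captured frame of the speaker's face).
    img = Image.new("RGB", (64, 64), "white")
    ImageDraw.Draw(img).ellipse((16, 16, 48, 48), outline="black", fill="peachpuff")

    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)          # edge emphasis
    dominant = sorted(img.quantize(colors=4).convert("RGB").getcolors(64 * 64),
                      reverse=True)[0][1]                             # most frequent colour

    # Minimal SVG: background in the dominant colour plus one placeholder arc where a
    # real implementation would write the shapes fitted to the detected edge points.
    svg = (
        '<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">'
        f'<rect width="64" height="64" fill="rgb{dominant}"/>'
        '<path d="M16 32 A16 16 0 0 1 48 32" stroke="black" fill="none"/>'
        "</svg>"
    )
    print(edges.size, svg[:60])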
Emerging perceptions of facial plastic surgery among medical students.
Rosenthal, E; Clark, J M; Wax, M K; Cook, T A
2001-11-01
The purpose of this study was to examine the perceptions of medical students regarding facial aesthetic surgery and those specialists most likely to perform aesthetic or reconstructive facial surgery. A survey was designed based on a review of the literature to assess the desirable characteristics and the perceived role of the facial plastic and reconstructive surgeon (FPRS). The surveys were distributed to 2 populations: medical students from 4 medical schools and members of the general public. A total of 339 surveys were collected, 217 from medical students and 122 from the general public. Medical students and the public had similar responses. The results demonstrated that respondents preferred a male plastic surgeon from the ages of 41 to 50 years old and would look to their family doctor for a recommendation. Facial aesthetic and reconstructive surgery was considered the domain of maxillofacial and general plastic surgeons, not the FPRS. Integration of the FPRS into the medical school curriculum may help to improve the perceived role of the specialty within the medical community. It is important for the specialty to communicate to aspiring physicians the dedicated training of an otolaryngologist specializing in FPRS.