Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy.
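For readers replicating this style of validation, the two analyses described above reduce to a Pearson correlation (concurrent validity) and paired comparisons across time segments (discriminant validity). Below is a minimal Python sketch with simulated placeholder scores, not the authors' data or code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 85
cfcs_stretch = rng.gamma(2.0, 2.0, n)                   # CFCS scores during passive joint stretch
nrs_stretch = 0.8 * cfcs_stretch + rng.normal(0, 1, n)  # observer NRS ratings (simulated)
cfcs_baseline = rng.gamma(1.0, 1.0, n)                  # CFCS scores at baseline
cfcs_recovery = rng.gamma(1.2, 1.0, n)                  # CFCS scores during recovery

# Concurrent validity: correlation between the two pain measures
r, p = stats.pearsonr(cfcs_stretch, nrs_stretch)

# Discriminant validity: stretch segment vs. baseline and recovery (paired tests)
t_base, p_base = stats.ttest_rel(cfcs_stretch, cfcs_baseline)
t_rec, p_rec = stats.ttest_rel(cfcs_stretch, cfcs_recovery)
print(f"r = {r:.2f} (p = {p:.3g}); stretch vs. baseline p = {p_base:.3g}; "
      f"stretch vs. recovery p = {p_rec:.3g}")
```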
The Facial Expression Coding System (FACES): Development, Validation, and Utility
ERIC Educational Resources Information Center
Kring, Ann M.; Sloan, Denise M.
2007-01-01
This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…
Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord
2015-01-01
This paper reports the development (Study 1) and validation (Study 2) of a categorical system for attributing facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al. 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self, and object). In Study 2 we rate the facial expressions (n = 2320) from the OPD interviews (10 minutes per interview) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP; Horowitz et al. 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE Coding System. The attribution of interactive, self-related, and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
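To make the winning representation concrete, here is a minimal sketch of a Gabor-filter feature extractor feeding a discriminative classifier. The image sizes, filter-bank parameters, and the LinearSVC stand-in are illustrative assumptions; the paper itself compared Gabor and ICA representations using different matchers:

```python
import numpy as np
from skimage.filters import gabor_kernel
from scipy.signal import fftconvolve
from sklearn.svm import LinearSVC

def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_thetas=4):
    """Mean/variance of responses to a small Gabor filter bank."""
    feats = []
    for f in freqs:
        for t in range(n_thetas):
            k = gabor_kernel(frequency=f, theta=np.pi * t / n_thetas)
            resp = fftconvolve(img, np.real(k), mode="same")
            feats.extend([resp.mean(), resp.var()])
    return np.array(feats)

# Placeholder data: 60 grayscale face-region images, 12 facial-action classes
rng = np.random.default_rng(1)
X = np.array([gabor_features(rng.normal(size=(48, 48))) for _ in range(60)])
y = rng.integers(0, 12, 60)
clf = LinearSVC().fit(X, y)   # stand-in for the paper's matching procedure
print("training accuracy:", clf.score(X, y))
```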
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.
Facial Expression Generation from Speaker's Emotional States in Daily Conversation
NASA Astrophysics Data System (ADS)
Mori, Hiroki; Ohshima, Koh
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions, and the latter are coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
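A mapping of this kind can be prototyped with any small regressor from emotion-dimension vectors to AU intensities. The sketch below uses scikit-learn's MLPRegressor on simulated parallel data; the three abstract dimensions and ten AUs are placeholders, not the paper's actual coding scheme:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
E = rng.uniform(-1, 1, (200, 3))     # rated emotional states (3 abstract dimensions)
# Simulated AU intensities for 10 action units (the "parallel data")
AU = np.clip(E @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10)), 0, None)

# Small neural network mapping emotional state -> AU intensities
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(E, AU)
new_state = [[0.7, 0.2, -0.1]]       # an emotional state rated for a new utterance
print("predicted AU intensities:", net.predict(new_state).round(2))
```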
Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto
2015-04-01
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation.
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.
Brain Systems for Assessing Facial Attractiveness
ERIC Educational Resources Information Center
Winston, Joel S.; O'Doherty, John; Kilner, James M.; Perrett, David I.; Dolan, Raymond J.
2007-01-01
Attractiveness is a facial attribute that shapes human affiliative behaviours. In a previous study we reported a linear response to facial attractiveness in orbitofrontal cortex (OFC), a region involved in reward processing. There are strong theoretical grounds for the hypothesis that coding stimulus reward value also involves the amygdala. The…
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.
Psychometric challenges and proposed solutions when scoring facial emotion expression codes.
Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver
2014-12-01
Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are recommended methods of data treatment. When scoring facial emotion expression ability, maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
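The recommended data treatment (loess smoothing plus baseline control) and the preferred scoring method (maximum score) can be illustrated in a few lines. The sketch below assumes a single software-coded expression intensity series in which the first 60 frames serve as the neutral baseline; both are assumptions for illustration:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
t = np.arange(300)                    # frames of a software-coded expression series
raw = 0.2 + 0.5 * np.exp(-((t - 180) / 40) ** 2) + 0.1 * rng.normal(size=t.size)

smoothed = lowess(raw, t, frac=0.1, return_sorted=False)  # loess smoothing
baseline = smoothed[:60].mean()       # assumed neutral-baseline segment
corrected = smoothed - baseline       # control for baseline facial expression
score = corrected.max()               # "maximum score" scoring method
print(f"expression ability score: {score:.3f}")
```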
A comparison of facial expression properties in five hylobatid species.
Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget
2014-07-01
Little is known about facial communication in lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use of each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a Phylogenetic Generalized Least Squares (PGLS) regression analysis to correlate those properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level.
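For readers unfamiliar with PGLS, it is ordinary regression with a residual covariance structure derived from the phylogeny. A minimal sketch using statsmodels' GLS follows; the covariance matrix C is a random placeholder here, whereas a real analysis would derive it from branch lengths of the hylobatid tree:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_species = 5
rate = rng.uniform(1, 5, n_species)        # e.g., rate of facial expression per species
group_size = rng.uniform(2, 6, n_species)  # socio-ecological predictor
A = rng.uniform(0.1, 0.9, (n_species, n_species))
C = A @ A.T + np.eye(n_species)            # placeholder phylogenetic covariance matrix

X = sm.add_constant(group_size)
pgls = sm.GLS(rate, X, sigma=C).fit()      # PGLS = GLS with phylogenetic sigma
print(pgls.params, pgls.pvalues)
```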
Spontaneous and posed facial expression in Parkinson's disease.
Smith, M C; Smith, M K; Ellgring, H
1996-09-01
Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.
Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi
2017-01-01
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser.
ERIC Educational Resources Information Center
Camras, Linda A.; Oster, Harriet; Bakeman, Roger; Meng, Zhaolan; Ujiie, Tatsuo; Campos, Joseph J.
2007-01-01
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants'…
Automated and objective action coding of facial expressions in patients with acute facial palsy.
Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando
2015-05-01
Aim of the present observational single center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11% and there was no significant difference from patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
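The bilateral comparison described above can be expressed as a count of activated AUs per facial side plus a percentage asymmetry score. The formula below is an assumed illustration; the abstract does not give the study's exact definition:

```python
import numpy as np

def asymmetry_score(healthy_side, paretic_side):
    """Percent asymmetry between per-AU activation indicators of the two sides
    (an assumed formula, not necessarily the one used in the study)."""
    h, p = np.asarray(healthy_side, float), np.asarray(paretic_side, float)
    return 100.0 * np.abs(h - p).sum() / max((h + p).sum(), 1e-9)

baseline_healthy = np.array([1, 1, 1, 0, 1, 1, 0, 1])  # AUs detected, healthy side
baseline_paretic = np.array([1, 0, 0, 0, 1, 0, 0, 0])  # AUs detected, paralyzed side
print(f"activated AUs: {baseline_paretic.sum()} vs. {baseline_healthy.sum()}")
print(f"asymmetry: {asymmetry_score(baseline_healthy, baseline_paretic):.0f}%")
```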
Production of Emotional Facial Expressions in European American, Japanese, and Chinese Infants.
ERIC Educational Resources Information Center
Camras, Linda A.; And Others
1998-01-01
European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
Realistic facial expression of virtual human based on color, sweat, and tears effects.
Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan
2014-01-01
Generating extreme appearances, such as sweating when scared, tears when happy (crying), and blushing (in anger and happiness), is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.
The identification of unfolding facial expressions.
Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo
2012-01-01
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al. 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.
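The core analysis, correlating the RT probability density with the time course of coded facial activity, can be sketched with a Gaussian kernel density estimate. All series below are simulated stand-ins for the FACS-coded recordings and observer RTs:

```python
import numpy as np
from scipy.stats import gaussian_kde, pearsonr

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)                    # slow-motion presentation time (s)
au_intensity = np.exp(-((t - 6) / 1.5) ** 2)   # FACS-coded action intensity over time
rts = rng.normal(6.5, 1.0, 100)                # correct-response times (simulated)

rt_density = gaussian_kde(rts)(t)              # RT probability density over time
r, p = pearsonr(au_intensity, rt_density)
print(f"facial activity vs. RT density: r = {r:.2f}, p = {p:.3g}")
```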
Differences between Children and Adults in the Recognition of Enjoyment Smiles
ERIC Educational Resources Information Center
Del Giudice, Marco; Colle, Livia
2007-01-01
The authors investigated the differences between 8-year-olds (n = 80) and adults (n = 80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple…
A Neural Basis of Facial Action Recognition in Humans
Srinivasan, Ramprakash; Golomb, Julie D.
2016-01-01
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.
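The cross-participant decoding claim corresponds to a leave-one-subject-out classification scheme over multivoxel patterns. Here is a minimal sketch with simulated voxel data; the logistic-regression decoder and the injected signal are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(6)
n_trials, n_voxels = 240, 500
X = rng.normal(size=(n_trials, n_voxels))   # voxel patterns, one row per trial
y = rng.integers(0, 2, n_trials)            # action unit present / absent in the image
subject = np.repeat(np.arange(12), 20)      # 12 participants, 20 trials each
X[y == 1, :50] += 0.4                       # inject a weak signal shared across subjects

# Train on all-but-one participant, test on the held-out participant
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=subject, cv=LeaveOneGroupOut())
print(f"left-out-subject decoding accuracy: {scores.mean():.2f}")
```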
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
... Process To Develop Consumer Data Privacy Code of Conduct Concerning Facial Recognition Technology AGENCY... technology. This Notice announces the meetings to be held in February, March, April, May, and June 2014. The... promote trust regarding facial recognition technology in the commercial context. NTIA encourages...
Helping the police with their inquiries
NASA Astrophysics Data System (ADS)
Kitson, Anthony J.
1995-09-01
The UK Home Office has held a long-term interest in facial recognition. Work has concentrated upon providing the UK police with facilities to improve the use that can be made of the memory of victims and witnesses, rather than automatically matching images. During the 1970s a psychological coding scheme and a search method were developed by Aberdeen University and the Home Office. This has been incorporated into systems for searching prisoner photographs, both experimentally and operationally. The coding scheme has also been incorporated in a facial likeness composition system. The Home Office is currently implementing a national criminal record system (Phoenix), and work has been conducted to define and demonstrate standards for image-enabled terminals for this application. Users have been consulted to establish suitable picture quality for the purpose, and a study of compression methods is in hand. Recently there has been increased use made by UK courts of expert testimony based upon the measurement of facial images. We are currently working with a group of practitioners to examine and improve the quality of such evidence and to develop a national standard.
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural way of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, and so on. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
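Blend shape interpolation itself is compact: each expressive mesh is the neutral mesh plus a weighted sum of per-target displacement offsets. The sketch below uses toy arrays in place of sculpted meshes; in a BSI+FACS design like the one described, the weights would be driven by action-unit activations:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vertices = 1000
neutral = rng.normal(size=(n_vertices, 3))              # neutral face mesh (toy data)
targets = {e: neutral + 0.05 * rng.normal(size=(n_vertices, 3))
           for e in ("angry", "happy", "sad", "fear")}  # sculpted expression meshes

def blend(weights):
    """Blend shape interpolation: neutral + sum_i w_i * (target_i - neutral)."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * (targets[name] - neutral)
    return out

mesh = blend({"happy": 0.8, "sad": 0.0, "angry": 0.0, "fear": 0.1})
print(mesh.shape)   # blended mesh, same topology as the neutral face
```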
Joint Patch and Multi-label Learning for Facial Action Unit Detection
Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang
2016-01-01
The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state-of-the-art.
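As a point of reference for the one-vs-all baseline that JPML improves upon, here is a minimal multi-label AU detector on patch features. JPML's group-sparse patch selection and AU dependency modeling are beyond this sketch; the shapes and data are placeholders:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(8)
n_frames, n_patch_feats, n_aus = 500, 200, 12
X = rng.normal(size=(n_frames, n_patch_feats))          # features from facial patches
Y = (rng.random((n_frames, n_aus)) < 0.2).astype(int)   # multi-label AU targets

# One-vs-all baseline: an independent classifier per action unit
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(f"mean F1: {f1_score(Y, clf.predict(X), average='macro'):.2f}")
```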
Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus
2005-01-01
The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
Sparse coding for flexible, robust 3D facial-expression synthesis.
Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun
2012-01-01
Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
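The framework's core idea, representing expressive faces as sparse combinations of learned dictionary atoms, can be sketched with scikit-learn's dictionary learning. The toy vectors below stand in for stacked vertex offsets; reconstructing from sparse codes is what gives robustness to noisy or incomplete input:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(9)
examples = rng.normal(size=(100, 300))   # 100 expression vectors (e.g., stacked vertex offsets)

# Learn a sparse dictionary over example expressions
dl = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, random_state=0).fit(examples)

noisy = examples[0] + 0.3 * rng.normal(size=300)   # noisy/incomplete input expression
codes = dl.transform(noisy[None, :])               # sparse coefficients
recovered = codes @ dl.components_                 # reconstruction from sparse codes
print("nonzero coefficients:", int((codes != 0).sum()))
```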
A study of patient facial expressivity in relation to orthodontic/surgical treatment.
Nafziger, Y J
1994-09-01
A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires (besides the routine static study, such as records, study models, photographs, and cephalometric tracings) the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face is studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. With video recordings of faces and photographic images taken from the video recordings, the authors have modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique itself is based on defining individual facial expressions and then codifying such expressions through the use of minimal, anatomic action units. These action units combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and of six control subjects without dentofacial deformation, were studied. I registered 6,278 facial expressions and defined 18,844 action units from them. A classification of the facial expressions made by subject groups and repeated in quantified time frames has allowed establishment of "rules" or "norms" relating to expression, thus further enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to the facial expressions of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. Changes noted tended toward a functioning that is identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and the chin. The results of our study are supported by the clinical observations and suggest that the FACS technique should be able to provide a coding for the study of facial expression.
Affect in Human-Robot Interaction
2014-01-01
is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978)... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of... robot's behavior through decision-making, learning, or action selection; a number of researchers used the fuzzy logic approach to emotion generation
A dynamic appearance descriptor approach to facial actions temporal modeling.
Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja
2014-02-01
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs)-onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, the GEMEP-FERA dataset in database-dependent experiments, in cross-database experiments using the Cohn-Kanade, and the SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
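The second stage described above, enforcing temporal consistency over frame-wise segment scores with a Markov model, is essentially a Viterbi decoding. The sketch below uses invented per-frame probabilities and transition weights rather than the paper's learned models:

```python
import numpy as np

states = ["neutral", "onset", "apex", "offset"]
T = np.array([[0.90, 0.10, 0.00, 0.00],   # invented segment-transition probabilities
              [0.00, 0.80, 0.20, 0.00],   # onset -> onset or apex
              [0.00, 0.00, 0.85, 0.15],   # apex -> apex or offset
              [0.10, 0.00, 0.00, 0.90]])  # offset -> neutral or offset

def viterbi(frame_scores, trans):
    """frame_scores: (n_frames, n_states) per-frame classifier probabilities."""
    logp, logT = np.log(frame_scores + 1e-12), np.log(trans + 1e-12)
    delta, back = logp[0].copy(), []
    for obs in logp[1:]:
        step = delta[:, None] + logT     # step[i, j] = best score ending in i -> j
        back.append(step.argmax(0))
        delta = step.max(0) + obs
    path = [int(delta.argmax())]
    for b in reversed(back):             # backtrack the most probable state sequence
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

rng = np.random.default_rng(10)
scores = rng.dirichlet(np.ones(4), size=30)   # noisy per-frame probabilities
print(viterbi(scores, T))
```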
A large-scale analysis of sex differences in facial expressions
Kodra, Evan; el Kaliouby, Rana; LaFrance, Marianne
2017-01-01
There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. A large-scale study examines whether women are consistently more expressive than men or whether the effects are dependent on the emotion expressed. Studies of gender differences in expressivity have been somewhat restricted to data collected in lab settings or which required labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants’ own webcams. Using a new automated facial coding technology we coded facial activity. We find that women are not universally more expressive across all facial actions. Nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. It appears that generally women express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and is dependent on the discrete emotional state.
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-term debate concerns whether the sensorimotor coding carried out during observation of transitive actions reflects the low-level movement implementation details or the movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or an upper face posture manipulation during the execution of a four-alternative labelling task of pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions differing in a specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model to test the role of the sensorimotor system in the perception of actions visually presented.
Matsumoto, David; Willingham, Bob
2006-09-01
Facial behaviors of medal winners of the judo competition at the 2004 Athens Olympic Games were coded with P. Ekman and W. V. Friesen's (1978) Facial Action Coding System (FACS) and interpreted using their Emotion FACS dictionary. Winners' spontaneous expressions were captured immediately when they completed medal matches, when they received their medal from a dignitary, and when they posed on the podium. The 84 athletes who contributed expressions came from 35 countries. The findings strongly supported the notion that expressions occur in relation to emotionally evocative contexts in people of all cultures, that these expressions correspond to the facial expressions of emotion considered to be universal, that expressions provide information that can reliably differentiate the antecedent situations that produced them, and that expressions that occur without inhibition are different from those that occur in social and interactive settings.
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1976-01-01
The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed.
NASA Astrophysics Data System (ADS)
Amijoyo Mochtar, Andi
2018-02-01
Applications of robotics have become important for human life in recent years. Many robot specifications have been improved and enriched with advances in technology. One of them is the humanoid robot with facial expressions that come closer to natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile-strength testing of silicone rubber as artificial skin. Facial expressions were calculated by determining the dimensions, material properties, number of node elements, boundary condition, force condition, and analysis type. A facial expression of the robot is determined by the direction and magnitude of the external force at the driven point. The robot's facial expression matches the human facial expression, with a muscle structure of the face that follows human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted to follow human expressions. Tensile testing is conducted to check the proportional force of the artificial skin that can be applied to future robot facial expressions. Combining calculated and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang
2016-10-01
The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were accurately and reliably differentiated significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and the test set because of differences in lighting, shade, race, and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE, and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
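The transfer scheme described above can be approximated by learning a dictionary on the source domain and sparse-coding both domains against it before classification. The sketch below simulates the domain shift; the OMP coder and LinearSVC are illustrative choices, not necessarily those of the paper:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(11)
X_src = rng.normal(size=(300, 120))                     # source-domain expression features
y_src = rng.integers(0, 6, 300)                         # 6 expression classes
X_tgt = X_src[:80] + 0.5 * rng.normal(size=(80, 120))   # shifted target domain (simulated)
y_tgt = y_src[:80]

# Learn the common primitive model (dictionary) on the source domain
dico = MiniBatchDictionaryLearning(n_components=40, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=8,
                                   random_state=0).fit(X_src)
# Train the recognizer on sparse codes; evaluate on target-domain codes
clf = LinearSVC().fit(dico.transform(X_src), y_src)
print(f"cross-domain accuracy: {clf.score(dico.transform(X_tgt), y_tgt):.2f}")
```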
The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity
Korb, Sebastian; With, Stéphane; Niedenthal, Paula; Kaiser, Susanne; Grandjean, Didier
2014-01-01
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.
Stability of Facial Affective Expressions in Schizophrenia
Fatouros-Bergman, H.; Spang, J.; Merten, J.; Preisler, G.; Werbart, A.
2012-01-01
Thirty-two video-recorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second. Sixty-four selected sequences in which the patients spoke about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour did not appear to depend on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is dominated primarily by the negative emotions of disgust and, to a lesser extent, contempt, and that this is a fairly stable feature. PMID:22966449
Adkinson, Joshua M; Murphy, Robert X
2011-05-01
In 2009, the National Highway Traffic Safety Administration projected that 33,963 people would die and millions would be injured in motor vehicle collisions (MVCs). Multiple studies have evaluated the impact of restraint devices in MVCs. This study examines longitudinal changes in facial fractures after MVCs as a result of the utilization of restraint devices. The Pennsylvania Trauma Systems Foundation-Pennsylvania Trauma Outcomes Study database was queried for MVCs from 1989 to 2009. Restraint device use was noted, and facial fractures were identified by International Classification of Diseases, Ninth Revision codes. Surgeon cost data were extrapolated. More than 15,000 patients sustained ≥1 facial fracture. Only orbital blowout fractures increased over the 20 years. Patients were 2.1% less likely every year to have ≥1 facial fracture, which translated into decreased estimated surgeon charges. Increased use of protective devices by patients involved in MVCs resulted in a change in the incidence of different facial fractures, with a reduced need for reconstructive surgery.
ERIC Educational Resources Information Center
Dondi, Marco; Messinger, Daniel; Colle, Marta; Tabasso, Alessia; Simion, Francesca; Barba, Beatrice Dalla; Fogel, Alan
2007-01-01
To better understand the form and recognizability of neonatal smiling, 32 newborns (14 girls; M = 25.6 hr) were videorecorded in the behavioral states of alertness, drowsiness, active sleep, and quiet sleep. Baby Facial Action Coding System coding of both lip corner raising (simple or non-Duchenne) and lip corner raising with cheek raising…
ERIC Educational Resources Information Center
Fogel, Alan; Hsu, Hui-Chin; Shapiro, Alyson F.; Nelson-Goens, G. Christina; Secrist, Cory
2006-01-01
Different types of smiling varying in amplitude of lip corner retraction were investigated during 2 mother-infant games--peekaboo and tickle--at 6 and 12 months and during normally occurring and perturbed games. Using Facial Action Coding System (FACS), infant smiles were coded as simple (lip corner retraction only), Duchenne (simple plus cheek…
Sayers, W Michael; Sayette, Michael A
2013-09-01
Research on emotion suppression has shown a rebound effect, in which expression of the targeted emotion increases following a suppression attempt. In prior investigations, participants have been explicitly instructed to suppress their responses, which has drawn the act of suppression into metaconsciousness. Yet emerging research emphasizes the importance of nonconscious approaches to emotion regulation. This study is the first in which a craving rebound effect was evaluated without simultaneously raising awareness about suppression. We aimed to link spontaneously occurring attempts to suppress cigarette craving to increased smoking motivation assessed immediately thereafter. Smokers (n = 66) received a robust cued smoking-craving manipulation while their facial responses were videotaped and coded using the Facial Action Coding System. Following smoking-cue exposure, participants completed a behavioral choice task previously found to index smoking motivation. Participants evincing suppression-related facial expressions during cue exposure subsequently valued smoking more than did those not displaying these expressions, which suggests that internally generated suppression can exert powerful rebound effects.
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
2016-01-01
Altered emotional processing, including reduced facial expression of emotion and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. The aim was to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the Facial Action Coding System. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients than in healthy controls (all Ps < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
Social Use of Facial Expressions in Hylobatids
Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja
2016-01-01
Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than when not. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with conspecifics. PMID:26978660
Compressive sensing using optimized sensing matrix for face verification
NASA Astrophysics Data System (ADS)
Oey, Endra; Jeffry; Wongso, Kelvin; Tommy
2017-12-01
Biometrics is one solution to the problems that arise from password-based data access, such as forgetting a password or having to recall many different ones. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access the data. Facial biometrics was chosen for its low implementation cost and reasonably accurate identification results. The face verification system adopted in this research uses Compressive Sensing (CS), which reduces dimensionality and encrypts the facial test image by representing it as a sparse signal. The encrypted data can then be reconstructed using a sparse coding algorithm. Two sparse coding algorithms, Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification study. The reconstructed sparse signal is compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. With a non-optimized sensing matrix, the system achieved 99% accuracy with IRLS (face verification response time of 4.917 s) and 96.33% with OMP (0.4046 s); with an optimized sensing matrix, it achieved 99% with IRLS (13.4791 s) and 98.33% with OMP (3.1571 s).
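The sketch below illustrates the CS verification loop under simplified assumptions (a random Gaussian sensing matrix, a synthetic sparse signal, and an arbitrary acceptance threshold), using scikit-learn's OMP solver rather than the paper's exact implementations.

```python
# Illustrative compressive-sensing verification: compress, recover with OMP,
# then compare the recovered code to the enrolled one by Euclidean norm.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                          # signal dim, measurements, sparsity
x = np.zeros(n)                               # synthetic sparse "face" signal
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # sensing matrix (could be optimized)
y = Phi @ x                                   # compressed, "encrypted" measurement

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_                             # reconstructed sparse code

enrolled = x                                  # code stored at enrollment time
distance = np.linalg.norm(x_hat - enrolled)
print("verified:", distance < 0.5)            # threshold is an arbitrary example
```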
Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda
2017-01-01
People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.
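As a rough sketch of the correlational step, assuming per-frame AU intensities are already available as a matrix (the frame count matches the abstract; the AU set and data are made up):

```python
# Illustrative coactivation analysis: correlate per-frame AU intensities to
# see which facial actions tend to move together. Data and AU count are made up.
import numpy as np

rng = np.random.default_rng(0)
frames, n_aus = 600, 10                       # 600 frames, 10 tracked action units
au_intensity = rng.random(size=(frames, n_aus))

coactivation = np.corrcoef(au_intensity, rowvar=False)   # (n_aus, n_aus) matrix
print(np.round(coactivation, 2))
```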
Universals and cultural variations in 22 emotional expressions across five cultures.
Cordaro, Daniel T; Sun, Rui; Keltner, Dacher; Kamble, Shanmukh; Huddar, Niranjan; McNeil, Galen
2018-02-01
We collected and coded, using the Facial Action Coding System (FACS), over 2,600 free-response facial and body displays of 22 emotions in China, India, Japan, Korea, and the United States to test 5 hypotheses concerning universals and cultural variants in emotional expression. New techniques enabled us to identify cross-cultural core patterns of expressive behaviors for each of the 22 emotions. We also documented systematic cultural variations in expressive behavior within each culture that were shaped by resemblances in cultural values, and identified a gradient of universality for the 22 emotions. Our discussion focused on the science of new expressions and on how the evidence from this investigation identifies the extent to which emotional displays vary across cultures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Norm-based coding of facial identity in adults with autism spectrum disorder.
Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D
2015-03-01
It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face, and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than the near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya
2016-01-01
A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like “automatic” response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target’s facial expressions depends on whether participants are motivated to infer the target’s emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target’s emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target’s emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target’s emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes. PMID:27055206
Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
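The following toy sketch conveys the predictive-coding intuition only; the update rule, weights, and trial sequence are invented for illustration and are not the authors' fitted model.

```python
# Toy illustration only: familiarity estimates updated by prediction errors,
# with recognition as a weighted mix of facial and contextual familiarity.
# The learning rate, weights, and trial sequence are invented.
def update(familiarity, observed, learning_rate=0.2):
    prediction_error = observed - familiarity
    return familiarity + learning_rate * prediction_error, prediction_error

face_fam, context_fam = 0.0, 0.0
for outcome in [1, 1, 0, 1, 1]:              # 1 = the face appeared in this context
    face_fam, pe = update(face_fam, outcome)
    context_fam, _ = update(context_fam, outcome)
    recognition = 0.5 * face_fam + 0.5 * context_fam
    print(f"prediction error {pe:+.2f} -> recognition signal {recognition:.2f}")
```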
Facial correlates of emotional behaviour in the domestic cat (Felis catus).
Bennett, Valerie; Gourkow, Nadine; Mills, Daniel S
2017-08-01
Leyhausen's (1979) work on cat behaviour and the facial expressions associated with offensive and defensive behaviour is widely embraced as the standard for interpreting agonistic behaviour in this species. However, it is a largely anecdotal description that can easily be misunderstood. Recently a facial action coding system has been developed for cats (CatFACS), similar to that used for objectively coding human facial expressions. This study reports on the use of this system to describe the relationship between behaviour and facial expressions of cats in confinement contexts without and with human interaction, in order to generate hypotheses about the relationship between these expressions and underlying emotional state. Video recordings of 29 cats resident in a Canadian animal shelter were analysed using 1-0 sampling of 275 4-s video clips. Observations under the two conditions were analysed descriptively using hierarchical cluster analysis for binomial data and indicated that, in both situations, about half of the data clustered into three groups. An argument is presented that these largely reflect states based on varying degrees of relaxed engagement, fear and frustration. Facial actions associated with fear included blinking and half-blinking and a left head and gaze bias at lower intensities. Facial actions consistently associated with frustration included hissing, nose-licking, dropping of the jaw, raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue. Relaxed engagement appeared to be associated with a right gaze and head turn bias. The results also indicate potential qualitative changes associated with differences in the intensity of emotional expression following human intervention. The results were also compared to the classic description of "offensive and defensive moods" in cats (Leyhausen, 1979) and to previous work by Gourkow et al. (2014a) on behavioural styles in cats, in order to assess whether these observations had replicable features noted by others. This revealed evidence of convergent validity between the methods. However, the use of CatFACS revealed elements relating to vocalisation and response lateralisation not previously reported in this literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach to detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment in cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). A new performance improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images at different scales and rotations.
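A minimal sketch of the chain-coding step, assuming a single 8-connected open contour in a binary edge image; the contour follower and the direction convention are illustrative, not the paper's implementation.

```python
# Sketch of Freeman 8-direction chain coding of a binary edge contour, the
# representation used above to suppress redundant votes in the Hough space.
import numpy as np

# 8-neighbour offsets indexed by chain-code symbol 0..7 (E, NE, N, NW, W, SW, S, SE).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(edge, start):
    """Follow an 8-connected open curve from `start`, emitting one symbol per step."""
    visited, code, (r, c) = {start}, [], start
    moved = True
    while moved:
        moved = False
        for sym, (dr, dc) in enumerate(DIRS):
            nr, nc = r + dr, c + dc
            if (0 <= nr < edge.shape[0] and 0 <= nc < edge.shape[1]
                    and edge[nr, nc] and (nr, nc) not in visited):
                visited.add((nr, nc))
                code.append(sym)
                r, c, moved = nr, nc, True
                break
    return code

edge = np.zeros((5, 5), dtype=bool)
edge[[4, 3, 2, 1, 0], [0, 1, 2, 3, 4]] = True   # a diagonal edge segment
print(chain_code(edge, (4, 0)))                  # -> [1, 1, 1, 1]
```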
Grossman, Ruth B; Edelson, Lisa R; Tager-Flusberg, Helen
2013-06-01
People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Participants were 22 male children and adolescents with HFA and 18 typically developing (TD) controls (17 males, 1 female). The authors used a story retelling task to elicit emotionally laden narratives, which were analyzed through the use of acoustic measures and perceptual codes. Naïve listeners coded all productions for emotion type, degree of expressiveness, and awkwardness. The group with HFA was not significantly different in accuracy or expressiveness of facial productions, but was significantly more awkward than the TD group. Participants with HFA were significantly more expressive in their vocal productions, with a trend for greater awkwardness. Severity of social communication impairment, as captured by the Autism Diagnostic Observation Schedule (ADOS; Lord, Rutter, DiLavore, & Risi, 1999), was correlated with greater vocal and facial awkwardness. Facial and vocal expressions of participants with HFA were as recognizable as those of their TD peers but were qualitatively different, particularly when listeners coded samples with intact dynamic properties. These preliminary data show qualitative differences in nonverbal communication that may have significant negative impact on the social communication success of children and adolescents with HFA.
Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S
2017-05-01
Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.
Seeing Emotions: A Review of Micro and Subtle Emotion Expression Training
ERIC Educational Resources Information Center
Poole, Ernest Andre
2016-01-01
In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system of measuring muscular movement in the face and its relationship to the…
Soccer-Related Facial Trauma: A Nationwide Perspective.
Bobian, Michael R; Hanba, Curtis J; Svider, Peter F; Hojjat, Houmehr; Folbe, Adam J; Eloy, Jean Anderson; Shkoukani, Mahdi A
2016-12-01
Soccer participation continues to increase among all ages in the US. Our objective was to analyze trends in soccer-related facial injury epidemiology, demographics, and mechanisms of injury. The National Electronic Injury Surveillance System was evaluated for soccer-related facial injuries from 2010 through 2014. Results for the product code "soccer" were filtered for injuries to the face. The number of injuries was extrapolated, and data were analyzed for age, sex, specific injury diagnoses, locations, and mechanisms. In all, 2054 soccer-related facial trauma entries were analyzed. During this time, the number of injuries remained relatively stable. Lacerations were the most common diagnosis (44.2%), followed by contusions and fractures. The most common site of fracture was the nose (75.1%). Of fractures with a reported mechanism of injury, the most common was head-to-head collisions (39.0%). Patients <19 years accounted for 66.9% of injuries, and athletes over 18 years old had a higher risk of fractures. The incidence of soccer-related facial trauma has remained stable, but the severity of these injuries remains a concern. Facial protection in soccer is virtually absent, and our findings reinforce the need to educate athletes, families, and physicians on injury awareness and prevention. © The Author(s) 2016.
The extraction and use of facial features in low bit-rate visual communication.
Pearson, D
1992-01-29
A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.
The faces of pain: a cluster analysis of individual differences in facial activity patterns of pain.
Kunz, M; Lautenbacher, S
2014-07-01
There is general agreement that facial activity during pain conveys pain-specific information but is nevertheless characterized by substantial inter-individual differences. With the present study we aim to investigate whether these differences represent idiosyncratic variations or whether they can be clustered into distinct facial activity patterns. Facial actions during heat pain were assessed in two samples of pain-free individuals (n = 128; n = 112) and were later analysed using the Facial Action Coding System. Hierarchical cluster analyses were used to look for combinations of single facial actions in episodes of pain. The stability/replicability of facial activity patterns was determined across samples as well as across different basic social situations. Cluster analyses revealed four distinct activity patterns during pain, which stably occurred across samples and situations: (I) narrowed eyes with furrowed brows and wrinkled nose; (II) opened mouth with narrowed eyes; (III) raised eyebrows; and (IV) furrowed brows with narrowed eyes. In addition, a considerable number of participants were facially completely unresponsive during pain induction (stoic cluster). These activity patterns seem to be reaction stereotypies in the majority of individuals (in nearly two-thirds), whereas a minority displayed varying clusters across situations. These findings suggest that there is no uniform set of facial actions but instead there are at least four different facial activity patterns occurring during pain that are composed of different configurations of facial actions. Raising awareness about these different 'faces of pain' might hold the potential of improving the detection and, thereby, the communication of pain. © 2013 European Pain Federation - EFIC®
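A small sketch of this analysis style, assuming each participant is summarized as a binary vector of facial actions shown during pain; the Jaccard metric and cluster count are plausible choices for binary data, not necessarily the authors':

```python
# Illustrative hierarchical clustering of binary facial-action profiles
# (1 = action unit present during pain). Data are simulated, not the study's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(128, 12)).astype(bool)  # 128 people x 12 AUs

distances = pdist(profiles, metric="jaccard")   # distance suited to binary data
tree = linkage(distances, method="average")
clusters = fcluster(tree, t=4, criterion="maxclust")        # ask for 4 patterns
print(np.bincount(clusters)[1:])                             # cluster sizes
```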
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.
Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus
2013-12-01
Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of emotion with gender discrimination task revealed increased activation of inferior parietal lobule, which highlights the involvement of parietal areas in processing of high level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
Skilful communication: Emotional facial expressions recognition in very old adults.
María Sarabia-Cobo, Carmen; Navas, María José; Ellgring, Heiner; García-Rodríguez, Beatriz
2016-02-01
The main objective of this study was to assess the changes associated with ageing in the ability to identify emotional facial expressions, and to what extent such age-related changes depend on the intensity with which each basic emotion is manifested. A randomised controlled trial was carried out on 107 subjects, who performed a six-alternative forced-choice emotional expression identification task. The stimuli consisted of 270 virtual emotional faces expressing the six basic emotions (happiness, sadness, surprise, fear, anger and disgust) at three different levels of intensity (low, pronounced and maximum). The virtual faces were generated from facial surface changes, as described in the Facial Action Coding System (FACS). A progressive age-related decline in the ability to identify emotional facial expressions was detected. The ability to recognise the intensity of expressions was among the variables most strongly impaired with age, and the valence of emotion was also poorly identified, particularly for negative emotions. Nurses should be mindful of how ageing affects communication with older patients. In this study, very old adults had more difficulty identifying emotional facial expressions, especially low-intensity expressions and those associated with difficult emotions like disgust or fear. Copyright © 2015 Elsevier Ltd. All rights reserved.
Psychocentricity and participant profiles: implications for lexical processing among multilinguals
Libben, Gary; Curtiss, Kaitlin; Weber, Silke
2014-01-01
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
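A toy rendering of the core idea, mapping three ability scores onto eye, mouth, and ear sizes with matplotlib; the geometry is invented for illustration and is not the authors' Facial Profile implementation.

```python
# Invented toy rendering of a "Facial Profile": map reading, speaking, and
# listening scores (0-1) to eye, mouth, and ear sizes on a schematic face.
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening):
    ax.add_patch(Circle((0, 0), 1.0, fill=False))                  # head outline
    for x in (-0.4, 0.4):                                          # eyes ~ reading
        ax.add_patch(Circle((x, 0.3), 0.08 + 0.15 * reading))
    ax.add_patch(Ellipse((0, -0.4), 0.2 + 0.6 * speaking, 0.15))   # mouth ~ speaking
    for x in (-1.0, 1.0):                                          # ears ~ listening
        ax.add_patch(Ellipse((x, 0.0), 0.15, 0.2 + 0.5 * listening))
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-1.5, 1.5)
    ax.set_aspect("equal")
    ax.axis("off")

fig, ax = plt.subplots()
facial_profile(ax, reading=0.9, speaking=0.4, listening=0.6)
plt.show()
```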
Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition
NASA Astrophysics Data System (ADS)
Buciu, Ioan; Pitas, Ioannis
Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, in which only a small fraction of the neural cells involved in face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, encoding for face recognition seems to rely on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant-information minimization, mutual-information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
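The representation step might look like the sketch below, where random projections stand in for the learned cost-sensitive hashing functions; the sizes and thresholding are illustrative assumptions, not the paper's learned parameters.

```python
# Illustrative stand-in for the representation step: random projections play
# the role of the learned cost-sensitive hashing functions, pixel patches are
# thresholded into binary codes, and codes are pooled into a histogram.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 49))          # 200 patches x 7x7 raw pixel values
W = rng.normal(size=(49, 8))                  # 8 "hashing functions" (learned in CS-LBFL)

bits = (patches @ W > 0).astype(int)          # (200, 8) binary code per patch
codes = bits @ (1 << np.arange(8))            # pack each code into an int in 0..255
histogram, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
print(histogram.shape)                        # (256,) real-valued face descriptor
```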
Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin
2017-02-01
It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent; however, they mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD; that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDCs but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude
2015-01-01
“Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
Diminished facial emotion expression and associated clinical characteristics in Anorexia Nervosa.
Lang, Katie; Larsson, Emma E C; Mavromara, Liza; Simic, Mima; Treasure, Janet; Tchanturia, Kate
2016-02-28
This study aimed to investigate emotion expression in a large group of children, adolescents and adults with Anorexia Nervosa (AN) and to investigate the associated clinical correlates. One hundred and forty-one participants (AN = 66, HC = 75) were recruited, and positive and negative film clips were used to elicit emotion expressions. The Facial Activation Coding system (FACES) was used to code emotion expression, and subjective ratings of emotion were collected. Individuals with AN displayed fewer positive emotions during the positive film clip than healthy controls (HC). There was no significant difference between the groups on the Positive and Negative Affect Scale (PANAS). The AN group displayed emotional incongruence (reporting a different emotion to what would be expected given the stimuli, with limited facial affect to signal the emotion experienced), whereby they reported feeling significantly higher rates of negative emotion during the positive clip. There were no differences in emotion expression between the groups during the negative film clip; despite this, individuals with AN reported feeling significantly higher levels of negative emotion during the negative clip. Diminished positive emotion expression was associated with more severe clinical symptoms, which could suggest that these individuals represent a group with serious social difficulties that may require specific attention in treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness's verbal description into the corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target, merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist, using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance in facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method is limited in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image as the class whose SVM output is at a minimum, the accuracy of expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
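A minimal sketch of the score-fusion idea, with toy matching scores standing in for the paper's shape and appearance matchers; the RBF kernel and the simulated labels are assumptions.

```python
# Toy sketch of SVM score fusion: combine a shape-based and an appearance-based
# matching score into one same/different-expression decision. Data are simulated.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
shape_scores = rng.random(100)                # e.g., feature-point ratio similarity
appearance_scores = rng.random(100)           # e.g., texture similarity
X = np.column_stack([shape_scores, appearance_scores])
y = (shape_scores + appearance_scores + rng.normal(0, 0.2, size=100) > 1.0).astype(int)

fusion_svm = SVC(kernel="rbf").fit(X, y)      # kernel choice is an assumption
print(fusion_svm.predict([[0.9, 0.8], [0.1, 0.2]]))  # likely [1 0]
```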
Brain systems for assessing the affective value of faces
Said, Christopher P.; Haxby, James V.; Todorov, Alexander
2011-01-01
Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research—perception of emotional expressions and face evaluation—could be integrated into a common, cognitive neuroscience framework. PMID:21536552
ERIC Educational Resources Information Center
Oster, Harriet; And Others
1992-01-01
Compared subjects' judgments about emotions expressed by the faces of infants pictured in slides to predictions made by the Max system of measuring emotional expression. Judgments did not coincide with Max predictions for fear, anger, sadness, and disgust. Results indicated that expressions of negative affect by infants are not fully…
Contextual influences on pain communication in couples with and without a partner with chronic pain.
Gagnon, Michelle M; Hadjistavropoulos, Thomas; MacNab, Ying C
2017-10-01
This is an experimental study of pain communication in couples. Despite evidence that chronic pain in one partner impacts both members of the dyad, dyadic influences on pain communication have not been sufficiently examined and are typically studied based on retrospective reports. Our goal was to directly study contextual influences (ie, presence of chronic pain, gender, relationship quality, and pain catastrophizing) on self-reported and nonverbal (ie, facial expressions) pain responses. Couples with (n = 66) and without (n = 65) an individual with chronic pain (ICP) completed relationship and pain catastrophizing questionnaires. Subsequently, one partner underwent a pain task (pain target, PT), while the other partner observed (pain observer, PO). In couples with an ICP, the ICP was assigned to be the PT. Pain intensity and PO perceived pain intensity ratings were recorded at multiple intervals. Facial expressions were video recorded throughout the pain task. Pain-related facial expression was quantified using the Facial Action Coding System. The most consistent predictor of either partner's pain-related facial expression was the pain-related facial expression of the other partner. Pain targets provided higher pain ratings than POs and female PTs reported and showed more pain, regardless of chronic pain status. Gender and the interaction between gender and relationship satisfaction were predictors of pain-related facial expression among PTs, but not POs. None of the examined variables predicted self-reported pain. Results suggest that contextual variables influence pain communication in couples, with distinct influences for PTs and POs. Moreover, self-report and nonverbal responses are not displayed in a parallel manner.
Statistical Analysis of Online Eye and Face-tracking Applications in Marketing
NASA Astrophysics Data System (ADS)
Liu, Xuan
Eye-tracking and face-tracking technology have been widely adopted to study viewers' attention and emotional response. In this dissertation, we apply these two technologies to investigate effective online content designed to attract and direct attention and to engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues between American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study investigating the relation between experiment participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers, and how to boost audiences' desire to watch the movies.
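The joint model itself is not specified in the abstract; a minimal two-stage stand-in, which summarises each viewer's emotion trajectory and then relates the summaries to watching intention, might look like the sketch below. All data and variable names are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_viewers, n_frames = 60, 120
t = np.linspace(0, 1, n_frames)

# emotion[i, j]: positive-emotion score of viewer i at frame j (synthetic)
slopes = rng.normal(0.5, 0.3, n_viewers)
emotion = slopes[:, None] * t + rng.normal(0, 0.1, (n_viewers, n_frames))

# Stage 1: per-viewer intercept and slope via least squares
A = np.column_stack([np.ones(n_frames), t])
coefs = np.linalg.lstsq(A, emotion.T, rcond=None)[0].T   # (n_viewers, 2)

# Stage 2: binary watching intention from the trajectory summaries
intent = (slopes + rng.normal(0, 0.2, n_viewers) > 0.5).astype(int)
model = LogisticRegression().fit(coefs, intent)
print(model.coef_)   # weight on the slope links rising emotion to intention
```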
Fanti, Kostas A; Kyranides, Melina Nicole; Panayiotou, Georgia
2017-02-01
The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of participants' facial expressions was analysed using FaceReader, a facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced specifically more angry facial expressions when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. FaceReader findings were verified by the reduced facial electromyography at the corrugator, but not the zygomatic, muscle in response to violent films shown by individuals high in CU traits. Additional analysis suggested that sympathy to victims explained the association between CU traits and reduced facial reactions to violent films.
Face recognition with the Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Suarez, Pedro F.
1991-12-01
The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. The thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
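A minimal eigenface sketch in this spirit, with random arrays standing in for face images, shows the projection-as-features and reconstruction-as-compression ideas:

```python
import numpy as np

rng = np.random.default_rng(2)
n_faces, h, w = 40, 32, 32
faces = rng.random((n_faces, h * w))       # flattened training images (stand-ins)

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD yields the covariance eigenvectors without forming the covariance
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                        # keep the 10 leading components

def project(img):
    """Feature vector: coefficients of img in the eigenface basis."""
    return eigenfaces @ (img - mean_face)

def reconstruct(coeffs):
    """Approximate the image back from its coefficients (compression)."""
    return mean_face + coeffs @ eigenfaces

probe = faces[0]
err = np.abs(probe - reconstruct(project(probe))).mean()
print(f"mean reconstruction error with 10 components: {err:.3f}")
```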
Sayette, Michael A.; Creswell, Kasey G.; Dimoff, John D.; Fairbairn, Catharine E.; Cohn, Jeffrey F.; Heckman, Bryan W.; Kirchner, Thomas R.; Levine, John M.; Moreland, Richard L.
2017-01-01
We integrated research on emotion and on small groups to address a fundamental and enduring question facing alcohol researchers: What are the specific mechanisms that underlie the reinforcing effects of drinking? In one of the largest alcohol-administration studies yet conducted, we employed a novel group-formation paradigm to evaluate the socioemotional effects of alcohol. Seven hundred twenty social drinkers (360 male, 360 female) were assembled into groups of 3 unacquainted persons each and given a moderate dose of an alcoholic, placebo, or control beverage, which they consumed over 36 min. These groups’ social interactions were video recorded, and the duration and sequence of interaction partners’ facial and speech behaviors were systematically coded (e.g., using the Facial Action Coding System). Alcohol consumption enhanced individual- and group-level behaviors associated with positive affect, reduced individual-level behaviors associated with negative affect, and elevated self-reported bonding. Our results indicate that alcohol facilitates bonding during group formation. Assessing nonverbal responses in social contexts offers new directions for evaluating the effects of alcohol. PMID:22760882
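Duration-and-sequence coding of this kind reduces, at its simplest, to aggregating timed behaviour events per person; the toy event log below is hypothetical and not the study's actual coding scheme:

```python
from collections import defaultdict

# (person_id, behaviour, start_s, end_s): hypothetical coded events
events = [
    (1, "smile", 12.0, 14.5),
    (2, "speech", 13.0, 20.0),
    (1, "speech", 15.0, 18.2),
    (3, "smile", 16.0, 17.0),
]

# total duration of each behaviour per person
durations = defaultdict(float)
for person, behaviour, start, end in events:
    durations[(person, behaviour)] += end - start

for key, secs in sorted(durations.items()):
    print(key, round(secs, 1), "s")
```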
Appetitive Motivation and Negative Emotion Reactivity among Remitted Depressed Youth
Hankin, Benjamin L.; Wetter, Emily K.; Flory, Kate
2012-01-01
Depression has been characterized as involving altered appetitive motivation and emotional reactivity. Yet no study has examined objective indices of emotional reactivity when the appetitive/approach system is suppressed in response to failure to attain a self-relevant goal and desired reward. Three groups of youth (N = 98, ages 9–15; remitted depressed, n = 34; externalizing disordered without depression, n = 30; and healthy controls, n = 34) participated in a novel reward striving task designed to activate the appetitive/approach motivation system. Objective facial expressions of emotion were videotaped and coded throughout both failure (i.e., nonreward) and control (success and reward) conditions. Observational coding of facial expressions as well as youths’ subjective emotion reports showed that the remitted depressed youth specifically exhibited more negative emotional reactivity to failure in the reward striving task, but not the control condition. Neither externalizing disordered (i.e., ADHD, CD, and/or ODD) nor control youth displayed greater negative emotional reactivity in either the failure or control condition. Findings suggest that depression among youth is related to dysregulated appetitive motivation and associated negative emotional reactivity after failing to achieve an important, self-relevant goal and not attaining reward. These deficits in reward processing appear to be specific to depression as externalizing disordered youth did not display negative emotional reactivity to failure after their appetitive motivation system was activated. PMID:22901275
Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing
2017-01-01
To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.
Auerbach, Sarah
2017-01-01
Trait cheerfulness predicts individual differences in experiences and behavioral responses in various humor experiments and settings. The present study is the first to investigate whether trait cheerfulness also influences the impact of a hospital clown intervention on the emotional state of patients. Forty-two adults received a clown visit in a rehabilitation center and rated their emotional state and trait cheerfulness afterward. Facial expressions of patients during the clown visit were coded with the Facial Action Coding System. Looking at the total sample, the hospital clown intervention elicited more frequent facial expressions of genuine enjoyment (Duchenne smiles) than other smiles (Non-Duchenne smiles), and more Duchenne smiles went along with more perceived funniness, a higher level of global positive feelings and transcendence. This supports the notion that overall, hospital clown interventions are beneficial for patients. However, when considering individual differences in the receptiveness to humor, results confirmed that high trait cheerful patients showed more Duchenne smiles than low trait cheerful patients (with no difference in Non-Duchenne smiles), and reported a higher level of positive emotions than low trait cheerful individuals. In summary, although hospital clown interventions on average successfully raise the patients’ level of positive emotions, not all patients in hospitals are equally susceptible to respond to humor with amusement, and thus do not equally benefit from a hospital clown intervention. Implications for research and practitioners are discussed. PMID:29180976
A systems view of mother-infant face-to-face communication.
Beebe, Beatrice; Messinger, Daniel; Bahrick, Lorraine E; Margolis, Amy; Buck, Karen A; Chen, Henian
2016-04-01
Principles of a dynamic, dyadic systems view of mother-infant face-to-face communication, which considers self- and interactive processes in relation to one another, were tested. The process of interaction across time in a large low-risk community sample at infant age 4 months was examined. Split-screen videotape was coded on a 1-s time base for communication modalities of attention, affect, orientation, touch, and composite facial-visual engagement. Time-series approaches generated self- and interactive contingency estimates in each modality. Evidence supporting the following principles was obtained: (a) Significant moment-to-moment predictability within each partner (self-contingency) and between the partners (interactive contingency) characterizes mother-infant communication. (b) Interactive contingency is organized by a bidirectional, but asymmetrical, process: Maternal contingent coordination with infant is higher than infant contingent coordination with mother. (c) Self-contingency organizes communication to a far greater extent than interactive contingency. (d) Self- and interactive contingency processes are not separate; each affects the other in communication modalities of facial affect, facial-visual engagement, and orientation. Each person's self-organization exists in a dynamic, homoeostatic (negative feedback) balance with the degree to which the person coordinates with the partner. For example, those individuals who are less facially stable are likely to coordinate more strongly with the partner's facial affect and vice versa. Our findings support the concept that the dyad is a fundamental unit of analysis in the investigation of early interaction. Moreover, an individual's self-contingency is influenced by the way the individual coordinates with the partner. Our results imply that it is not appropriate to conceptualize interactive processes without simultaneously accounting for dynamically interrelated self-organizing processes. (c) 2016 APA, all rights reserved.
Simoni, Payman; Ostendorf, Robert; Cox, Artemus J
2003-01-01
To examine the relationship between the use of restraining devices and the incidence of specific facial fractures in motor vehicle crashes. Retrospective analysis of patients with facial fractures following a motor vehicle crash. University of Alabama at Birmingham Hospital level I trauma center from 1996 to 2000. Of 3731 patients involved in motor vehicle crashes, a total of 497 patients were found to have facial fractures as determined by International Classification of Diseases, Ninth Revision (ICD-9) codes. Facial fractures were categorized as mandibular, orbital, zygomaticomaxillary complex (ZMC), and nasal. Use of seat belts alone was more effective in decreasing the chance of facial fractures in this population (from 17% to 8%) compared with the use of air bags alone (17% to 11%). The use of seat belts and air bags together decreased the incidence of facial fractures from 17% to 5%. Use of restraining devices in vehicles significantly reduces the chance of incurring facial fractures in a severe motor vehicle crash. However, use of air bags and seat belts does not change the pattern of facial fractures greatly except for ZMC fractures. Air bags are least effective in preventing ZMC fractures. Improving the mechanics of restraining devices might be needed to minimize facial fractures.
Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.
Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves
2018-06-01
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.
Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A.
2015-01-01
The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS was applied, a standardized coding system to measure chimpanzee facial movements, based on FACS developed for humans. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates’ open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans. PMID:26061420
What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.
Fitousi, Daniel
2017-07-01
A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures, dubbed herein "face files", consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs. That is, repeating a combination of facial features or altering them altogether led to faster responses than repeating or alternating only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.
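Partial-repetition costs can be computed directly from a trial sequence. The sketch below uses two simulated binary features and random response times to show the bookkeeping, not the study's actual design:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
identity = rng.integers(0, 2, n)   # feature 1 on each trial
emotion = rng.integers(0, 2, n)    # feature 2 on each trial
rt = rng.normal(600, 50, n)        # response times in ms (simulated)

# repetition status relative to the previous trial
rep_id = identity[1:] == identity[:-1]
rep_em = emotion[1:] == emotion[:-1]
rt_cur = rt[1:]

full = rep_id == rep_em   # both features repeated, or both changed
partial = ~full           # exactly one feature repeated

cost = rt_cur[partial].mean() - rt_cur[full].mean()
print(f"partial-repetition cost: {cost:.1f} ms")
```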
[Expression of the emotions in the drawing of a man by the child from 5 to 11 years of age].
Brechet, Claire; Picard, Delphine; Baldy, René
2007-06-01
This study examines the development of children's ability to express emotions in their human figure drawing. Sixty children of 5, 8, and 11 years were asked to draw "a man," and then a "sad", "happy," "angry" and "surprised" man. Expressivity of the drawings was assessed by means of two procedures: a limited choice and a free labelling procedure. Emotionally expressive drawings were then evaluated in terms of the number and the type of graphic cues that were used to express emotion. It was found that children are able to depict happiness and sadness at 8, anger and surprise at 11. With age, children use increasingly numerous and complex graphic cues for each emotion (i.e., facial expression, body position, and contextual cues). Graphic cues for facial expression (e.g., concave mouth, curved eyebrows, wide opened eyes) share strong similarities with specific "action units" described by Ekman and Friesen (1978) in their Facial Action Coding System. Children's ability to depict emotion in their human figure drawing is discussed in relation to perceptual, conceptual, and graphic abilities.
Automated detection of pain from facial expressions: a rule-based approach using AAM
NASA Astrophysics Data System (ADS)
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
2012-02-01
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and subtly. The rule-based method relies on feature points that provide facial action cues, extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
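A single rule of the kind described might look like the sketch below, assuming the common 68-point landmark convention (outer eye corners at indices 36 and 45, upper lip at 51) and a hypothetical threshold; the paper's actual rules and features are not reproduced here.

```python
import numpy as np

def detect_lip_raise(landmarks, neutral, threshold=0.03):
    """Rule for an upper-lip-raise cue from AAM-style shape vertices.

    landmarks/neutral: (68, 2) arrays; image y grows downward, so an
    upward lip movement decreases the y coordinate. Displacement is
    normalised by inter-ocular distance as a scale reference.
    """
    iod = np.linalg.norm(neutral[36] - neutral[45])        # inter-ocular distance
    lip_shift = (neutral[51, 1] - landmarks[51, 1]) / iod  # upward movement
    return lip_shift > threshold

neutral = np.random.default_rng(4).random((68, 2))
frame = neutral.copy()
frame[51, 1] -= 0.05 * np.linalg.norm(neutral[36] - neutral[45])
print(detect_lip_raise(frame, neutral))   # True: the rule fires on this frame
```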
Facial expressions of emotion and the course of conjugal bereavement.
Bonanno, G A; Keltner, D
1997-02-01
The common assumption that emotional expression mediates the course of bereavement is tested. Competing hypotheses about the direction of mediation were formulated from the grief work and social-functional accounts of emotional expression. Facial expressions of emotion in conjugally bereaved adults were coded at 6 months post-loss as they described their relationship with the deceased; grief and perceived health were measured at 6, 14, and 25 months. Facial expressions of negative emotion, in particular anger, predicted increased grief at 14 months and poorer perceived health through 25 months. Facial expressions of positive emotion predicted decreased grief through 25 months and a positive but nonsignificant relation to perceived health. Predictive relations between negative and positive emotional expression persisted when initial levels of self-reported emotion, grief, and health were statistically controlled, demonstrating the mediating role of facial expressions of emotion in adjustment to conjugal loss. Theoretical and clinical implications are discussed.
Seeing the mean: ensemble coding for sets of faces.
Haberman, Jason; Whitney, David
2009-06-01
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.
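The statistical advantage of ensemble coding is easy to demonstrate: averaging n noisy member representations shrinks the noise by sqrt(n). A toy simulation, with made-up emotionality values:

```python
import numpy as np

rng = np.random.default_rng(5)
set_emotions = rng.uniform(-1, 1, 16)   # 16 faces, sad (-1) to happy (+1)
member_noise = 0.5                       # noisy encoding of each member

noisy_members = set_emotions + rng.normal(0, member_noise, 16)
mean_estimate = noisy_members.mean()

# Averaging 16 noisy members shrinks noise by sqrt(16) = 4: effective
# noise on the mean is 0.125, far below any single member's 0.5.
print(f"true mean {set_emotions.mean():+.3f}, estimate {mean_estimate:+.3f}")
```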
Rozin, P; Lowery, L; Imada, S; Haidt, J
1999-04-01
It is proposed that 3 emotions--contempt, anger, and disgust--are typically elicited, across cultures, by violations of 3 moral codes proposed by R. A. Shweder and his colleagues (R. A. Shweder, N. C. Much, M. Mahapatra, & L. Park, 1997). The proposed alignment links anger to autonomy (individual rights violations), contempt to community (violation of communal codes including hierarchy), and disgust to divinity (violations of purity-sanctity). This is the CAD triad hypothesis. Students in the United States and Japan were presented with descriptions of situations that involve 1 of the types of moral violations and asked to assign either an appropriate facial expression (from a set of 6) or an appropriate word (contempt, anger, disgust, or their translations). Results generally supported the CAD triad hypothesis. Results were further confirmed by analysis of facial expressions actually made by Americans to the descriptions of these situations.
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing one to visualize the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
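One plausible way to compute such per-tile contribution scores, sketched here with synthetic trial records (the paper's exact measure may differ):

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_tiles = 500, 48

# visible[t, k]: tile k was uncovered at the moment trial t was answered
visible = rng.random((n_trials, n_tiles)) < 0.4
correct = rng.random(n_trials) < 0.7

# contribution: P(tile visible | correct) - P(tile visible | incorrect)
contrib = visible[correct].mean(axis=0) - visible[~correct].mean(axis=0)
print("most diagnostic tile:", contrib.argmax())
```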
Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.
Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J
2011-04-01
We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Epidemiology and resource utilization in pediatric facial fractures.
Soleimani, Tahereh; Greathouse, Shawn Travis; Sood, Rajiv; Tahiri, Youssef H; Tholpady, Sunil S
2016-02-01
Pediatric facial fractures, although uncommon, have a significant impact on public health and the US economy through the coexistence of other injuries and developmental deformities. Violence is one of the most frequent mechanisms leading to facial fracture. Teaching hospitals, while educating future medical professionals, have been linked to greater resource utilization in differing scenarios. This study was designed to compare the differences in patient characteristics and outcomes between teaching and non-teaching hospitals for violence-related pediatric facial fractures. Using the 2000-2009 Kids' Inpatient Database, 3881 patients younger than 18 years were identified with a facial fracture and an external cause of injury code for assault, fight, or abuse. Patients admitted at teaching hospitals were compared to those admitted at non-teaching hospitals in terms of demographics, injuries, and outcomes. Overall, 76.2% of patients had been treated at teaching hospitals. Compared to those treated at non-teaching hospitals, these patients were more likely to be younger, non-white, covered by Medicaid, from lower-income zip codes, and to have thoracic injuries, but the mortality rate was not significantly different. After adjusting for potential confounders, teaching status of the hospital was not found to be a predictor of either longer lengths of stay (LOS) or higher charges. There was no significant difference in LOS or charges between teaching and non-teaching hospitals after controlling for patient demographics. This suggests that the longer LOS observed at teaching hospitals is related to these institutions being more often involved in the care of underserved populations and patients with more severe injuries. Copyright © 2016 Elsevier Inc. All rights reserved.
Behavioral and facial thermal variations in 3-to 4-month-old infants during the Still-Face Paradigm
Aureli, Tiziana; Grazia, Annalisa; Cardone, Daniela; Merla, Arcangelo
2015-01-01
Behavioral and facial thermal responses were recorded in twelve 3- to 4-month-old infants during the Still-Face Paradigm (SFP). As in the usual procedure, infants were observed in a three-step, face-to-face interaction: a normal interaction episode (3 min); the “still-face” episode in which the mother became unresponsive and assumed a neutral expression (1 min); a reunion episode in which the mother resumed the interaction (3 min). A fourth step that consisted of a toy play episode (5 min) was added for our own research interest. We coded the behavioral responses through the Infant and Caregiver Engagement Phases system, and recorded facial skin temperature via thermal infrared (IR) imaging. Comparing the still-face episode to the play episode, the infants’ communicative engagement decreased, their engagement with the environment increased, and no differences emerged in self-regulatory and protest behaviors. We also found that facial skin temperature increased. For the behavioral results, infants recognized the interruption of the interactional reciprocity caused by the still-face presentation, without showing upset behaviors. According to autonomic results, the parasympathetic system was more active than the sympathetic, as usually happens in aroused but not distressed situations. With respect to the debate about the causal factor of the still-face effect, thermal data were consistent with behavioral data in showing this effect as related to the infants’ expectations of the nature of the social interactions being violated. Moreover, as these are associated with the infants’ subsequent interest in the environment, they indicate that thermal IR imaging is a reliable technique for the detection of physiological variations not only in the emotional system, as indicated by research to date, but also in the attention system. Using this technique for the first time during the SFP allowed us to record autonomic data in a more ecological manner than in previous studies. PMID:26528229
Short Alleles, Bigger Smiles? The Effect of 5-HTTLPR on Positive Emotional Expressions
Haase, Claudia M.; Beermann, Ursula; Saslow, Laura R.; Shiota, Michelle N.; Saturn, Sarina R.; Lwi, Sandy J.; Casey, James J.; Nguyen, Nguyen K.; Whalen, Patrick K.; Keltner, Dacher J.; Levenson, Robert W.
2015-01-01
The present research examined the effect of the 5-HTTLPR polymorphism in the serotonin transporter gene on objectively coded positive emotional expressions (i.e., laughing and smiling behavior objectively coded using the Facial Action Coding System). Three studies with independent samples of participants were conducted. Study 1 examined young adults watching still cartoons. Study 2 examined young, middle-aged, and older adults watching a thematically ambiguous yet subtly amusing film clip. Study 3 examined middle-aged and older spouses discussing an area of marital conflict (which typically produces both positive and negative emotion). Aggregating data across studies, results showed that the short allele of 5-HTTLPR predicted heightened positive emotional expressions. Results remained stable when controlling for age, gender, ethnicity, and depressive symptoms. These findings are consistent with the notion that the short allele of 5-HTTLPR functions as an emotion amplifier, which may confer heightened susceptibility to environmental conditions. PMID:26029940
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complex function using information acquired from several simple built-in functions. A self-learning and recognition system for human facial expressions, operating within a natural relationship between human and robot, is proposed. A robot with this system can understand human facial expressions and behave according to them after the learning process is complete. The system is modelled after the process by which a baby learns his or her parents’ facial expressions. Equipped with a camera, the robot can acquire face images, and with CdS sensors on its head it can obtain information about human actions. Using the information from these sensors, the robot can extract features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot acts according to the relevant facial expression.
Deliberately generated and imitated facial expressions of emotions in people with eating disorders.
Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate
2016-02-01
People with eating disorders have difficulties in socio-emotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate compared to HC when imitating facial expressions, whilst BN participants had a middle-range performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The study findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.
Facial nerve palsy: analysis of cases reported in children in a suburban hospital in Nigeria.
Folayan, M O; Arobieke, R I; Eziyi, E; Oyetola, E O; Elusiyan, J
2014-01-01
The study describes the epidemiology, treatment, and treatment outcomes of the 10 cases of facial nerve palsy seen in children managed at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife over a 10-year period. It also compares findings with reports from developed countries. This was a retrospective cohort review of pediatric cases of facial nerve palsy encountered in all the clinics run by specialists in the above-named hospital. A diagnosis of facial palsy was based on International Classification of Diseases, Ninth Revision, Clinical Modification codes. Information retrieved from the case notes included sex, age, number of days with the lesion prior to presentation in the clinic, diagnosis, treatment, treatment outcome, and referral clinic. Only 10 cases of facial nerve palsy were diagnosed in the institution during the study period. The prevalence of facial nerve palsy in this hospital was 0.01%. The lesion more commonly affected males and the right side of the face. All cases were associated with infections, mainly mumps (70% of cases). Case management included the use of steroids and eye pads for cases that presented within 7 days, and steroids, eye pads, and physical therapy for cases that presented later. All cases of facial nerve palsy associated with mumps and malaria infection fully recovered. The two cases of facial nerve palsy associated with otitis media only partially recovered. Facial nerve palsy in pediatric patients is more commonly associated with mumps in the study environment. Success was recorded with steroid therapy.
Facial expressions of emotion and psychopathology in adolescent boys.
Keltner, D; Moffitt, T E; Stouthamer-Loeber, M
1995-11-01
On the basis of the widespread belief that emotions underpin psychological adjustment, the authors tested 3 predicted relations between externalizing problems and anger, internalizing problems and fear and sadness, and the absence of externalizing problems and social-moral emotion (embarrassment). Seventy adolescent boys were classified into 1 of 4 comparison groups on the basis of teacher reports using a behavior problem checklist: internalizers, externalizers, mixed (both internalizers and externalizers), and nondisordered boys. The authors coded the facial expressions of emotion shown by the boys during a structured social interaction. Results supported the 3 hypotheses: (a) Externalizing adolescents showed increased facial expressions of anger, (b) on 1 measure internalizing adolescents showed increased facial expressions of fear, and (c) the absence of externalizing problems (or nondisordered classification) was related to increased displays of embarrassment. Discussion focused on the relations of these findings to hypotheses concerning the role of impulse control in antisocial behavior.
An Analysis of Biometric Technology as an Enabler to Information Assurance
2005-03-01
[Extraction residue: only fragments of this record survive, including table-of-contents entries for "Facial Recognition" and truncated text noting that facial recognition systems are gaining momentum and that video technology, from the traffic camera on the street corner onward, is everywhere.]
Coding and quantification of a facial expression for pain in lambs.
Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J
2016-11-01
Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period, and then the scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail-docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain.
The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
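Kendall's coefficient of concordance (W), used above for inter-observer reliability, can be computed directly from the standard formula W = 12S / (m^2(n^3 - n)); the observer-by-image score matrix below is made up for illustration:

```python
import numpy as np
from scipy.stats import rankdata

scores = np.array([   # rows: 5 observers; columns: 6 images (hypothetical)
    [1.2, 2.0, 0.5, 1.8, 0.9, 1.5],
    [1.0, 2.2, 0.7, 1.6, 1.1, 1.4],
    [1.3, 1.9, 0.4, 1.7, 0.8, 1.6],
    [0.9, 2.1, 0.6, 1.9, 1.0, 1.3],
    [1.1, 2.3, 0.5, 1.5, 0.9, 1.7],
])

m, n = scores.shape                         # m raters, n items
ranks = np.apply_along_axis(rankdata, 1, scores)  # rank within each rater
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))
print(f"Kendall's W = {w:.2f}")             # 1 = perfect agreement
```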
Different coding strategies for the perception of stable and changeable facial attributes.
Taubert, Jessica; Alais, David; Burr, David
2016-09-01
Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.
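A common way to estimate such sequential dependencies is to regress the current judgement on the previous stimulus: a positive previous-trial weight indicates an assimilative (priming-like) dependency, a negative weight a contrastive (adaptation-like) one. A simulated sketch, with made-up stimulus values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
stim = rng.normal(0, 1, n)    # e.g. morph level, male (-) to female (+)

# simulate responses that assimilate toward the previous stimulus
resp = (stim + 0.3 * np.roll(stim, 1) + rng.normal(0, 1, n)) > 0

X = np.column_stack([stim[1:], stim[:-1]])   # current + previous stimulus
model = LogisticRegression().fit(X, resp[1:])
print("previous-trial weight:", model.coef_[0][1])  # > 0: positive dependency
```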
Beurskens, Carien H G; Heymans, Peter G
2006-01-01
What is the effect of mime therapy on facial symmetry and severity of paresis in people with facial nerve paresis? Randomised controlled trial. 50 people with facial nerve paresis of more than nine months' duration, recruited from the outpatient departments of two metropolitan hospitals. The experimental group received three months of mime therapy consisting of massage, relaxation, inhibition of synkinesis, and co-ordination and emotional expression exercises. The control group was placed on a waiting list. Assessments were made on admission to the trial and three months later by a measurer blinded to group allocation. Facial symmetry was measured using the Sunnybrook Facial Grading System. Severity of paresis was measured using the House-Brackmann Facial Grading System. After three months of mime therapy, the experimental group had improved their facial symmetry by 20.4 points (95% CI 10.4 to 30.4) on the Sunnybrook Facial Grading System compared with the control group. In addition, the experimental group had reduced the severity of their paresis by 0.6 grade (95% CI 0.1 to 1.1) on the House-Brackmann Facial Grading System compared with the control group. These effects were independent of age, sex, and duration of paresis. Mime therapy improves facial symmetry and reduces the severity of paresis in people with facial nerve paresis.
Humor and laughter in patients with cerebellar degeneration.
Frank, B; Propson, B; Göricke, S; Jacobi, H; Wild, B; Timmann, D
2012-06-01
Humor is a complex behavior which includes cognitive, affective and motor responses. Based on observations of affective changes in patients with cerebellar lesions, the cerebellum may support cerebral and brainstem areas involved in understanding and appreciation of humorous stimuli and expression of laughter. The aim of the present study was to examine if humor appreciation, perception of humorous stimuli, and the succeeding facial reaction differ between patients with cerebellar degeneration and healthy controls. Twenty-three adults with pure cerebellar degeneration were compared with 23 age-, gender-, and education-matched healthy control subjects. No significant difference in humor appreciation and perception of humorous stimuli could be found between groups using the 3 Witz-Dimensionen Test, a validated test asking for funniness and aversiveness of jokes and cartoons. Furthermore, while observing jokes, humorous cartoons, and video sketches, facial expressions of subjects were videotaped and afterwards analysed using the Facial Action Coding System. Using depression as a covariate, the number, and to a lesser degree, the duration of facial expressions during laughter were reduced in cerebellar patients compared to healthy controls. In sum, appreciation of humor appears to be largely preserved in patients with chronic cerebellar degeneration. Cerebellar circuits may contribute to the expression of laughter. Findings add to the literature that non-motor disorders in patients with chronic cerebellar disease are generally mild, but do not exclude that more marked disorders may show up in acute cerebellar disease and/or in more specific tests of humor appreciation.
Automatic three-dimensional quantitative analysis for evaluation of facial movement.
Hontanilla, B; Aubá, C
2008-01-01
The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording the subject with three infrared-light cameras while they perform several facial movements such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages over other systems that have been developed for evaluation of facial movements, such as short calibration time, short measuring time, ease of use, and the fact that it provides not only distances but also velocities and areas. Thus the FACIAL CLIMA system could be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.
Facial expression system on video using widrow hoff
NASA Astrophysics Data System (ADS)
Jannah, M.; Zarlis, M.; Mawengkang, H.
2018-03-01
Facial expression recognition is an active research topic. It connects human feeling to computer applications such as human-computer interaction, data compression, facial animation and facial detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with the Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false positive rate. System accuracy depends on good technique and on the face positions used in training and testing.
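Widrow-Hoff learning is the least-mean-squares (LMS) delta rule applied to a single linear unit, which is exactly what an ADALINE is. The sketch below is a minimal version assuming flattened image feature vectors with labels in {-1, +1}; the paper's actual features, thresholds, and training schedule are not specified here.

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=50):
    """Widrow-Hoff (LMS) training of an ADALINE unit.

    X: (n_samples, n_features) image feature vectors.
    y: (n_samples,) labels in {-1, +1}.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            err = y_i - (w @ x_i + b)   # error on the raw linear output
            w += lr * err * x_i         # delta rule update
            b += lr * err
    return w, b

def predict(X, w, b):
    """Threshold the linear output to get class decisions."""
    return np.where(X @ w + b >= 0.0, 1, -1)
```

Detection rate and false positive rate can then be computed by comparing `predict` outputs against ground-truth labels on held-out frames.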
Seeing emotions: a review of micro and subtle emotion expression training
NASA Astrophysics Data System (ADS)
Poole, Ernest Andre
2016-09-01
In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system for measuring muscular movement in the face and its relationship to the expression of emotions. The trainings build upon that seminal work and present it in a form that laypeople and researchers alike can easily add to their personal toolbox for a variety of purposes. Outlined are my experiences across the training products, how they could be used in social science research, a brief comparison to automated systems, and possible next steps.
The contemptuous separation: Facial expressions of emotion and breakups in young adulthood
Heshmati, Saeideh; Sbarra, David A.; Mason, Ashley E.
2017-01-01
The importance of studying specific and expressed emotions after a stressful life event is well known, yet few studies have moved beyond assessing self-reported emotional responses to a romantic breakup. This study examined associations between computer-recognized facial expressions and self-reported breakup-related distress among recently separated college-aged young adults (N = 135; 37 men) on four visits across 9 weeks. Participants’ facial expressions were coded using the Computer Expression Recognition Toolbox while participants spoke about their breakups. Of the seven expressed emotions studied, only Contempt showed a unique association with breakup-related distress over time. At baseline, greater Contempt was associated with less breakup-related distress; however, over time, greater Contempt was associated with greater breakup-related distress. PMID:29249896
Wang, Yamin; Zhou, Lu
2016-10-01
Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
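The method rests on two ingredients: a kernel between temporally misaligned time-series, and structured-sparse coding in the induced feature space. As a rough illustration of the first ingredient only, the sketch below builds a Gram matrix from dynamic-time-warping distances between 1-D motion-pattern series. This DTW-based stand-in is not guaranteed positive semi-definite and is not the kernel the authors use; it merely shows how elastic alignment replaces rigid frame-by-frame comparison.

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gram_matrix(series, gamma=1.0):
    """K[i, j] = exp(-dtw(s_i, s_j) / gamma) over a list of series."""
    n = len(series)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-dtw(series[i], series[j]) / gamma)
    return K
```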
Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yu, Lu; Zhang, Jingyu; Liu, Yunhai
2000-12-01
In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, but not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established, in which facial animation is reconstructed from FAPs extracted from natural video scenes.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. The proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database and the ATT database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.
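As a hedged sketch of the two named ingredients, the code below applies a plain logarithmic intensity mapping (the paper's logarithmical image visualization transform is more elaborate than this) followed by basic 3x3 local binary pattern coding, whose block histograms would then serve as recognition features.

```python
import numpy as np

def log_visualize(img):
    """Compress illumination variation with a logarithmic mapping.

    img: 2-D uint8 grayscale image. A simple stand-in for the paper's
    logarithmical image visualization technique."""
    out = 255.0 * np.log1p(img.astype(np.float64)) / np.log(256.0)
    return out.astype(np.uint8)

def lbp_codes(img):
    """Basic 3x3 local binary pattern codes for all inner pixels."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(nbrs):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes  # histograms of these codes form the feature vector
```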
A multibiometric face recognition fusion framework with template protection
NASA Astrophysics Data System (ADS)
Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.
2010-04-01
In this work we present a multibiometric face recognition framework based on combining information from 2D with 3D facial features. The 3D biometrics channel is protected by a privacy enhancing technology, which uses error correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection matches and in some cases exceeds the performance of the corresponding unprotected equivalents, in addition to offering additional privacy protection.
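The abstract names error-correcting codes and cryptographic primitives without fixing a construction; the fuzzy commitment scheme is one representative way such template protection is built, sketched below with a toy repetition code standing in for a real ECC (all parameters are illustrative, not the authors' design).

```python
import hashlib
import numpy as np

def enroll(bio_bits, r=3):
    """Fuzzy-commitment-style enrollment (illustrative only).

    bio_bits: binary (0/1) feature vector from the 3D face channel.
    Stores only helper data and a hash; the raw template is discarded."""
    k = len(bio_bits) // r
    secret = np.random.randint(0, 2, k)          # random key bits
    codeword = np.repeat(secret, r)              # "encode" with a repetition code
    helper = codeword ^ bio_bits[:k * r]         # XOR-offset helper data
    return helper, hashlib.sha256(secret.tobytes()).hexdigest()

def verify(bio_bits, helper, digest, r=3):
    """Accept iff the fresh sample decodes back to the enrolled key."""
    noisy = helper ^ bio_bits[:len(helper)]
    blocks = noisy.reshape(-1, r)                # majority vote decodes each block
    secret = (blocks.sum(axis=1) > r // 2).astype(noisy.dtype)
    return hashlib.sha256(secret.tobytes()).hexdigest() == digest
```

Because the toy code corrects up to one bit error per block, small sample-to-sample variations in the features still verify, while the stored data reveal neither the key nor the biometric.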
Computer Recognition of Facial Profiles
1974-08-01
A system for the recognition of human faces from... [report-form fields and table-of-contents residue omitted] ...provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class...
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
NASA Astrophysics Data System (ADS)
Agustin, Eny Widhia; Hangga, Arimaz; Fahrian, Muhammad Iqbal; Azhari, Anis Fikri
2018-03-01
The implementation of a monitoring system in facial acupressure learning media could increase students' proficiency. However, common learning media still have not implemented a monitoring system in their learning process. This research was conducted to implement a monitoring system in a mannequin head prototype serving as a learning medium for facial acupressure, using Bluetooth, wireless and Ethernet. The results showed differences in delay time between Bluetooth and wireless or Ethernet, but the data showed no difference in the average delay time between the use of Bluetooth with wireless and the use of Bluetooth with Ethernet. Of all the facial acupressure points, the forehead point had the longest delay time of 11.93 seconds. The average delay time in all three classrooms was 1.96 seconds; therefore the use of Bluetooth, wireless and Ethernet is highly recommended in the monitoring system of facial acupressure learning media.
Empathy, Challenge, and Psychophysiological Activation in Therapist–Client Interaction
Voutilainen, Liisa; Henttonen, Pentti; Kahri, Mikko; Ravaja, Niklas; Sams, Mikko; Peräkylä, Anssi
2018-01-01
Two central dimensions in psychotherapeutic work are a therapist's empathy with clients and challenging their judgments. We investigated how they influence psychophysiological responses in the participants. Data were from psychodynamic therapy sessions, 24 sessions from 5 dyads, from which 694 therapist interventions were coded. Heart rate and electrodermal activity (EDA) of the participants were used to index emotional arousal. Facial muscle activity (electromyography) was used to index positive and negative emotional facial expressions. Electrophysiological data were analyzed in two time frames: (a) during the therapists' interventions and (b) across the whole psychotherapy session. Both empathy and challenge had an effect on psychophysiological responses in the participants. Therapists' empathy decreased clients' EDA and increased therapists' own EDA across the session. Therapists' challenge increased their own EDA in response to the interventions, but not across the sessions. Clients, on the other hand, did not respond to challenges during interventions, but challenges tended to increase EDA across a session. Furthermore, there was an interaction effect between empathy and challenge. Heart rate decreased and positive facial expressions increased in sessions where empathy and challenge were coupled, i.e., the amount of both empathy and challenge was either high or low. This suggests that these two variables work together. The results highlight the therapeutic functions and interrelation of empathy and challenge, and, in line with the dyadic system theory by Beebe and Lachmann (2002), the systemic linkage between interactional expression and individual regulation of emotion. PMID:29695992
[Study on the indexes of forensic identification by the occlusal-facial digital radiology].
Gao, Dong; Wang, Hu; Hu, Jin-liang; Xu, Zhe; Deng, Zhen-hua
2006-02-01
To discuss the coding of the full dentition with 32 locations and to measure several bony indexes in occlusal-facial digital radiology (DR). Three hundred DR orthopantomograms were randomly selected and the full dentition coded, and the diversity of the dental patterns was analyzed. One hundred DR lateral cephalograms were randomly selected and six indexes (N-S, N-Me, Cd-Gn, Cd-Go, NP-SN, MP-SN) were measured separately by one odontologist and one trained forensic graduate student; the coefficient of variation (CV) of every index was calculated and a correlation analysis performed to assess the consistency between the two sets of measurements. (1) The total diversity of the 300 dental patterns was 75%, a very high value. (2) All six quantitative variables had comparatively high CV values. (3) After linear correlation analysis between the two sets of measurements, all six correlation coefficients were close to 1, indicating that the measurements were stable and consistent. The method of coding the full dentition in DR orthopantomograms and measuring six bony indexes in DR lateral cephalograms can be used for forensic identification.
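Both reported statistics are elementary; a minimal sketch with made-up placeholder values (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

def coefficient_of_variation(x):
    """CV = sample standard deviation / mean, here as a percentage."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean() * 100.0

# consistency of one index (e.g. N-S) measured by two examiners;
# the values are placeholders, not the study's measurements
first_rater = np.array([61.2, 58.9, 63.1, 60.4, 62.0])
second_rater = np.array([61.0, 59.2, 62.8, 60.6, 61.7])
r, p = pearsonr(first_rater, second_rater)  # r close to 1 = consistent measurement
```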
Koudelová, J; Brůžek, J; Cagáňová, V; Krajíček, V; Velemínská, J
2015-08-01
To evaluate sexual dimorphism of facial form and shape and to describe differences between the average female and male face from 12 to 15 years. Overall 120 facial scans from healthy Caucasian children (17 boys, 13 girls) were longitudinally evaluated over a 4-year period between the ages of 12 and 15 years. Facial surface scans were obtained using a three-dimensional optical scanner Vectra-3D. Variation in facial shape and form was evaluated using geometric morphometric and statistical methods (DCA, PCA and permutation test). Average faces were superimposed, and the changes were evaluated using colour-coded maps. There were no significant sex differences (p > 0.05) in shape in any age category and no differences in form in the 12- and 13-year-olds, as the female faces were within the area of male variability. From the age of 14, a slight separation occurred, which was statistically confirmed. The differences were mainly associated with size. Generally boys had more prominent eyebrow ridges, more deeply set eyes, a flatter cheek area, and a more prominent nose and chin area. The development of facial sexual dimorphism during pubertal growth is connected with ontogenetic allometry. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage "low-motor" interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism.
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and yielding results comparable to studies using whole face information, only slightly lower (by ~2.5%) than the best whole-face facial expression recognition system while using only ~1/3 of the facial region.
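Sequential forward selection is a greedy wrapper: starting from an empty set, it repeatedly adds whichever feature most improves cross-validated accuracy. A minimal sketch; the SVM scoring setup is an assumption, not the paper's exact protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def sequential_forward_selection(X, y, n_keep):
    """Greedy SFS over geometric eye/eyebrow features.

    X: (n_samples, n_features), y: expression labels."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_keep and remaining:
        best_acc, best_f = -1.0, None
        for f in remaining:
            cols = selected + [f]
            acc = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()
            if acc > best_acc:
                best_acc, best_f = acc, f
        selected.append(best_f)   # keep the single best new feature
        remaining.remove(best_f)
    return selected
```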
Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions
ERIC Educational Resources Information Center
Sato, Wataru; Yoshikawa, Sakiko
2007-01-01
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Effect of an observer's presence on facial behavior during dyadic communication.
Yamamoto, K; Suzuki, N
2012-06-01
In everyday life, people communicate not only with another person but also in front of other people. How do people behave during communication when observed by others? Effects of an observer (presence vs absence) and interpersonal relationship (friends vs strangers vs alone) on facial behavior were examined. Participants viewed film clips that elicited positive affect (film presentation) and discussed their impressions about the clips (conversation). Participants rated their subjective emotions and social motives. Durations of smiles, gazes, and utterances of each participant were coded. The presence of an observer did not affect facial behavior during the film presentation, but did affect gazes during conversation. Whereas the presence of an observer seemed to facilitate affiliation in pairs of strangers, communication between friends was exclusive and not affected by an observer.
Short alleles, bigger smiles? The effect of 5-HTTLPR on positive emotional expressions.
Haase, Claudia M; Beermann, Ursula; Saslow, Laura R; Shiota, Michelle N; Saturn, Sarina R; Lwi, Sandy J; Casey, James J; Nguyen, Nguyen K; Whalen, Patrick K; Keltner, Dacher; Levenson, Robert W
2015-08-01
The present research examined the effect of the 5-HTTLPR polymorphism in the serotonin transporter gene on objectively coded positive emotional expressions (i.e., laughing and smiling behavior objectively coded using the Facial Action Coding System). Three studies with independent samples of participants were conducted. Study 1 examined young adults watching still cartoons. Study 2 examined young, middle-aged, and older adults watching a thematically ambiguous yet subtly amusing film clip. Study 3 examined middle-aged and older spouses discussing an area of marital conflict (that typically produces both positive and negative emotion). Aggregating data across studies, results showed that the short allele of 5-HTTLPR predicted heightened positive emotional expressions. Results remained stable when controlling for age, gender, ethnicity, and depressive symptoms. These findings are consistent with the notion that the short allele of 5-HTTLPR functions as an emotion amplifier, which may confer heightened susceptibility to environmental conditions. (c) 2015 APA, all rights reserved.
2002-06-07
Continue to Develop and Refine Emerging Technology: some of the emerging biometric devices, such as iris scans, facial recognition systems, and speaker verification systems... (976301)
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
NASA Astrophysics Data System (ADS)
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. Using a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for six expression parameters: happy, sad, neutral, angry, fearful, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. The MELS-SVM model, evaluated on 185 expression images of 10 persons, achieved a high accuracy of 99.998% using an RBF kernel.
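A minimal end-to-end sketch of the pipeline shape (PCA features feeding a multiclass SVM). scikit-learn has no ensemble least-squares SVM, so a standard RBF-kernel SVC stands in for MELS-SVM, and the data below are random placeholders rather than the paper's images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(185, 64 * 64)      # placeholder for 185 flattened face images
y = np.random.randint(0, 6, 185)      # placeholder labels for the six expressions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```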
Röder, Christian H; Mohr, Harald; Linden, David E J
2011-02-01
Faces are multidimensional stimuli that convey information for complex social and emotional functions. Separate neural systems have been implicated in the recognition of facial identity (mainly extrastriate visual cortex) and emotional expression (limbic areas and the superior temporal sulcus). Working-memory (WM) studies with faces have shown different but partly overlapping activation patterns in comparison to spatial WM in parietal and prefrontal areas. However, little is known about the neural representations of the different facial dimensions during WM. In the present study 22 subjects performed a face-identity or face-emotion WM task at different load levels during functional magnetic resonance imaging. We found a fronto-parietal-visual WM-network for both tasks during maintenance, including fusiform gyrus. Limbic areas in the amygdala and parahippocampal gyrus demonstrated a stronger activation for the identity than the emotion condition. One explanation for this finding is that the repetitive presentation of faces with different identities but the same emotional expression during the identity-task is responsible for the stronger increase in BOLD signal in the amygdala. These results raise the question how different emotional expressions are coded in WM. Our findings suggest that emotional expressions are re-coded in an abstract representation that is supported at the neural level by the canonical fronto-parietal WM network. Copyright © 2010 Elsevier Ltd. All rights reserved.
A Real-Time Interactive System for Facial Makeup of Peking Opera
NASA Astrophysics Data System (ADS)
Cai, Feilong; Yu, Jinhui
In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose, and mouth. Next, we pick SVG patterns from the pattern bank and compose them to make a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and education about Peking Opera.
Wickert, Natasha M; Wong Riff, Karen W Y; Mansour, Mark; Forrest, Christopher R; Goodacre, Timothy E E; Pusic, Andrea L; Klassen, Anne F
2018-01-01
Objective: The aim of this systematic review was to identify patient-reported outcome (PRO) instruments used in research with children/youth with conditions associated with facial differences and to identify the health concepts measured. Design: MEDLINE, EMBASE, CINAHL, and PsycINFO were searched from 2004 to 2016 to identify PRO instruments used in acne vulgaris, birthmarks, burns, ear anomalies, facial asymmetries, and facial paralysis patients. We performed a content analysis whereby the items were coded to identify concepts and categorized as positive or negative in content or phrasing. Results: A total of 7,835 articles were screened; 6 generic and 11 condition-specific PRO instruments were used in 96 publications. Condition-specific instruments were for acne (four), oral health (two), dermatology (one), facial asymmetries (two), microtia (one), and burns (one). The PRO instruments provided 554 items (295 generic; 259 condition specific) that were sorted into 4 domains, 11 subdomains, and 91 health concepts. The most common domain was psychological (n = 224 items). Of the identified items, 76% had negative content or phrasing (e.g., "Because of the way my face looks I wish I had never been born"). Given the small number of items measuring facial appearance (n = 19) and function (n = 22), the PRO instruments reviewed lacked content validity for patients whose condition impacted facial function and/or appearance. Conclusions: Treatments can change facial appearance and function. This review draws attention to a problem with content validity in existing PRO instruments. Our team is now developing a new PRO instrument called FACE-Q Kids to address this problem.
Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.
Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál
2014-02-01
Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
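Both spectral measures have compact definitions once single-trial theta phase and power are available (the wavelet or Hilbert step that produces them is assumed, not shown): ITC is the length of the mean unit phasor across trials, and ERSP is the power change relative to baseline in dB.

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC at one time-frequency point.

    phases: (n_trials,) theta-band phases in radians, e.g. from the
    140-200 ms window. ITC = |mean_k exp(i * phi_k)|: 0 = no phase
    locking across trials, 1 = perfect phase locking."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

def ersp_db(trial_power, baseline_power):
    """Event-related spectral perturbation in dB relative to baseline."""
    return 10.0 * np.log10(np.mean(trial_power) / np.mean(baseline_power))
```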
French-speaking children’s freely produced labels for facial expressions
Maassarani, Reem; Gosselin, Pierre; Montembeault, Patricia; Gagnon, Mathieu
2014-01-01
In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. 5- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell (2003) is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children. PMID:24926281
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
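Each VisNet layer is built on a self-organising map; a minimal SOM training sketch is below. The grid size and learning schedules are illustrative assumptions, and the between-layer associative learning that VisNet adds is omitted.

```python
import numpy as np

def train_som(X, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal self-organising map.

    X: (n_samples, n_features) input patterns (e.g. cartoon-face images)."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h * w, X.shape[1]))
    rows, cols = np.divmod(np.arange(h * w), w)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in X:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = (rows - rows[bmu]) ** 2 + (cols - cols[bmu]) ** 2
            nbh = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * nbh[:, None] * (x - weights)
    return weights.reshape(h, w, -1)
```

The clustering of identity-selective and expression-selective units then emerges from the map's topology-preserving competition, given the statistical independence of the two attributes in the training set.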
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
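For a binary outcome such as intensive care admission, the concordance statistic of a logistic regression equals the ROC AUC of its predicted probabilities. A minimal sketch with made-up placeholder rows (three binary face displays plus the National Early Warning Score as covariates; not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# columns: FD1, FD2, FD3 (face display present?) and NEWS; placeholder values
X = np.array([[1, 0, 0, 5], [0, 1, 0, 7], [0, 0, 0, 2],
              [1, 1, 0, 8], [0, 0, 0, 3], [0, 0, 1, 9]])
y = np.array([1, 1, 0, 1, 0, 1])   # 1 = admitted to intensive care

model = LogisticRegression().fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])  # C-index = AUC here
```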
Lupis, Sarah B; Lerman, Michelle; Wolf, Jutta M
2014-11-01
While previous research has suggested that anger and fear responses to stress are linked to distinct sympathetic nervous system (SNS) stress responses, little is known about how these emotions predict hypothalamus-pituitary-adrenal (HPA) axis reactivity. Further, earlier research primarily relied on retrospective self-report of emotion. The current study aimed at addressing both issues in male and female individuals by assessing the role of anger and fear in predicting heart rate and cortisol stress responses using both self-report and facial coding analysis to assess emotion responses. We exposed 32 healthy students (18 female; 19.6±1.7 yr) to an acute psychosocial stress paradigm (TSST) and measured heart rate and salivary cortisol levels throughout the protocol. Anger and fear before and after stress exposure was assessed by self-report, and video recordings of the TSST were assessed by a certified facial coder to determine emotion expression (FACS). Self-reported emotions and emotion expressions did not correlate (all p>.23). Increases in self-reported fear predicted blunted cortisol responses in men (β=0.41, p=.04). Also for men, longer durations of anger expression predicted exaggerated cortisol responses (β=0.67 p=.004), and more anger incidences predicted exaggerated cortisol and heart rate responses (β=0.51, p=.033; β=0.46, p=.066, resp.). Anger and fear did not predict SNS or HPA activity for females (all p>.23). The current differential self-report and facial coding findings support the use of multiple modes of emotion assessment. Particularly, FACS but not self-report revealed a robust anger-stress association that could have important downstream health effects for men. For women, future research may clarify the role of other emotions, such as self-conscious expressions of shame, for physiological stress responses. A better understanding of the emotion-stress link may contribute to behavioral interventions targeting health-promoting ways of responding emotionally to stress. Copyright © 2014 Elsevier Ltd. All rights reserved.
Biometrics: A Look at Facial Recognition
...a facial recognition system in the city's Oceanfront tourist area. The system has been tested and has recently been fully implemented. Senator Kenneth W. Stolle, the Chairman of the Virginia State Crime Commission, established a Facial Recognition Technology Sub-Committee to examine the issue of facial recognition technology. This briefing begins by defining biometrics and discussing examples of the technology. It then explains how biometrics...
Face Recognition Vendor Test 2000: Evaluation Report
2001-02-16
The biggest change in the facial recognition community since the completion of the FERET program has been the introduction of facial recognition products... program and significantly lowered system costs. Today there are dozens of facial recognition systems available that have the potential to meet... inquiries from numerous government agencies on the current state of facial recognition technology prompted the DoD Counterdrug Technology Development Program...
United States Homeland Security and National Biometric Identification
2002-04-09
...security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to... technology to control access onto their military bases using a software application developed by the Defense Manpower Management Command. Facial recognition systems... installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are...
Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically within the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were examined. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716
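The abstract does not spell out the classifier's probabilistic formulation; the sketch below shows plain collaborative-representation classification (ridge-coded residuals per class), the non-probabilistic core that such a classifier extends.

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative-representation classification of a colour feature vector.

    D: (n_features, n_train) training feature vectors as columns.
    labels: (n_train,) class of each column.  y: (n_features,) test vector."""
    # code y collaboratively over ALL training samples (ridge regression)
    A = D.T @ D + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(A, D.T @ y)
    classes = np.unique(labels)
    # assign to the class whose samples reconstruct y with the least residual
    residuals = [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```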
Troisi, Alfonso; Pompili, Enrico; Binello, Luigi; Sterpone, Alessandro
2007-03-30
Despite the central role of nonverbal behavior in regulating social interactions, its relationship to functional disability in schizophrenia has received little empirical attention. This study aimed at assessing the relationship of patients' spontaneous facial expressivity during the clinical interview to clinician-rated and self-reported measures of functional disability. The nonverbal behavior of 28 stabilized patients with schizophrenia was analyzed by using the Ethological Coding System for Interviews (ECSI). Functional disability was assessed using the Global Assessment of Functioning (GAF) scale and the Sheehan Disability Scale (DISS). Partial correlation analysis controlling for the confounding effects of neuroleptic treatment showed that facial expressivity was correlated with the GAF score (r=0.42, P=0.03) and the scores on the subscales of the DISS measuring work (r=-0.52, P=0.005) and social (r=-0.50, P=0.007) disability. In a multiple regression model, nonverbal behavior explained variation in patients' work and social disability better than negative symptoms. The results of this pilot study suggest that deficits in encoding affiliative signals may play a role in determining or aggravating functional disability in schizophrenia. One clinical implication of this finding is that remediation training programs designed to improve nonverbal communication could also serve as a useful adjunct for improving work and social functioning in patients with schizophrenia.
Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.
Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A
2017-12-01
To compare the detection of facial attributes in 2-D images by computer-based facial recognition software against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Area-under-the-curve values for individual receiver operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) than by the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.
Physical therapy for facial paralysis: a tailored treatment approach.
Brach, J S; VanSwearingen, J M
1999-04-01
Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. In 14 physical therapy sessions over 13 months, the patient had improved facial impairments (initial FGS score = 17/100, final FGS score = 68/100) and no reported functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.
Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery
NASA Astrophysics Data System (ADS)
Voormolen, Eduard H. J.; van Stralen, Marijn; Woerdeman, Peter A.; Pluim, Josien P. W.; Noordmans, Herke J.; Regli, Luca; Berkelbach van der Sprenkel, Jan W.; Viergever, Max A.
2011-03-01
Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures and warns if the surgeon drills too close will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a segmentation method to delineate the intra-temporal facial nerve centerline from clinically available temporal bone CT images semi-automatically. Our method requires the user to provide the start- and end-point of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model based on the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40 ± 0.20 mm (mean ± standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integration of this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that can adequately warn surgeons during temporal bone drilling, and effectively diminishes risks of iatrogenic facial nerve palsy.
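The envisaged warning interface reduces, at each tracked drill position, to a point-to-centerline distance check against safety margins. A minimal sketch; the thresholds are illustrative, not validated values from any navigation system.

```python
import numpy as np

def drill_warning(tip, centerline, caution_mm=3.0, stop_mm=1.0):
    """Distance from the tracked drill tip to the facial nerve centerline.

    tip: (3,) drill tip position in image coordinates (mm).
    centerline: (n_points, 3) segmented centerline points."""
    d = np.linalg.norm(centerline - tip, axis=1).min()
    if d < stop_mm:
        return d, "STOP: drill at facial nerve"
    if d < caution_mm:
        return d, "CAUTION: approaching facial nerve"
    return d, "safe"
```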
Magai, C; Cohen, C I; Culver, C; Gomberg, D; Malatesta, C
1997-11-01
Twenty-seven nursing home patients with mid- to late-stage dementia participated in a study of the relation between preillness personality, as indexed by attachment and emotion regulation style, and current emotional behavior. Preillness measures were completed by family members and current assessments of emotion were supplied by nursing home aides and family members; in addition, emotion was coded during a family visit using an objective coding system for facial emotion expressions. Attachment style was found to be related to the expression of positive affect, with securely attached individuals displaying more positive affect than avoidantly attached individuals. In addition, high ratings on premorbid hostility were associated with higher rates of negative affect and lower rates of positive affect. These findings indicate that premorbid aspects of personality show continuity over time, even in mid- to late-stage dementia.
Reproducibility of the dynamics of facial expressions in unilateral facial palsy.
Alagha, M A; Ju, X; Morley, S; Ayoub, A
2018-02-01
The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). Facial expressions of lip purse, cheek puff, and raising of eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
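Partial Procrustes alignment removes translation and rotation (but not scale) before the residual root-mean-square distance is computed; a minimal sketch for two corresponding vertex sets:

```python
import numpy as np

def partial_procrustes_rms(A, B):
    """RMS distance after aligning B to A by translation and rotation only.

    A, B: (n_vertices, 3) corresponding mesh vertices from the first and
    second capture of the same expression."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)   # orthogonal Procrustes solution
    if np.linalg.det(U @ Vt) < 0:         # guard against reflection
        U[:, -1] *= -1
    R = U @ Vt
    return np.sqrt(((A0 - B0 @ R) ** 2).sum(axis=1).mean())
```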
Kwit, Natalie A; Max, Ryan; Mead, Paul S
2018-01-01
Background: Clinical features of Lyme disease (LD) range from localized skin lesions to serious disseminated disease. Information on risk factors for Lyme arthritis, facial palsy, carditis, and meningitis is limited but could facilitate disease recognition and elucidate pathophysiology. Methods: Patients from high-incidence states treated for LD during 2005–2014 were identified in a nationwide insurance claims database using the International Classification of Diseases, Ninth Revision code for LD (088.81), antibiotic treatment history, and clinically compatible codiagnosis codes for LD manifestations. Results: Among 88,022 unique patients diagnosed with LD, 5,122 (5.8%) patients with 5,333 codiagnoses were identified: 2,440 (2.8%) arthritis, 1,853 (2.1%) facial palsy, 534 (0.6%) carditis, and 506 (0.6%) meningitis. Patients with disseminated LD had a lower median age (35 vs 42 years) and a higher male proportion (61% vs 50%) than those with nondisseminated LD. The greatest differential risks included arthritis in males aged 10–14 years (odds ratio [OR], 3.5; 95% confidence interval [CI], 3.0–4.2), facial palsy (OR, 2.1; 95% CI, 1.6–2.7) and carditis (OR, 2.4; 95% CI, 1.6–3.6) in males aged 20–24 years, and meningitis in females aged 10–14 years (OR, 3.4; 95% CI, 2.1–5.5), all relative to the 55–59-year referent age group. Males aged 15–29 years had the highest risk for complete heart block, a potentially fatal condition. Conclusions: The risk and manifestations of disseminated LD vary by age and sex. Provider education regarding at-risk populations and additional investigations into pathophysiology could enhance early case recognition and improve patient management. PMID:29326960
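A minimal sketch of the risk comparison reported above: an odds ratio with a 95% Wald confidence interval from a 2x2 table of counts. The counts here are fabricated placeholders, not data from the study.

```python
# Odds ratio and 95% CI from a 2x2 table (exposed vs referent stratum).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = outcome yes/no in exposed group; c,d = outcome yes/no in referent group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se_log) for s in (-1, 1))
    return or_, lo, hi

# e.g., arthritis counts in one age-sex stratum vs the referent stratum (invented)
print(odds_ratio_ci(120, 880, 40, 960))
```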
Patient experiences and outcomes following facial skin cancer surgery: A qualitative study.
Lee, Erica H; Klassen, Anne F; Lawson, Jessica L; Cano, Stefan J; Scott, Amie M; Pusic, Andrea L
2016-08-01
Early melanoma and non-melanoma skin cancer of the facial area are primarily treated with surgery. Little is known about the outcomes of treatment for facial skin cancer patients. The objective of the study was to identify concerns about aesthetics, procedures, and health from the patients' perspective after facial skin surgery. Semi-structured in-depth interviews were conducted with 15 participants. Line-by-line coding was used to establish categories and develop themes. We identified five major themes on the impact of skin cancer surgery: appearance-related concerns; psychological concerns (e.g., fear of new cancers or recurrence); social concerns (e.g., impact on social activities and interaction); physical concerns (e.g., pain and swelling); and satisfaction with the experience of care (e.g., satisfaction with the surgeon). The priority of participants was the removal of the facial skin cancer, as this reduced their overall worry. The aesthetic outcome was secondary but important, as it had important implications for the participants' social and psychological functioning. The participants' experience with the care provided by the surgeon and staff also contributed to their satisfaction with their treatment. This conceptual framework provides the basis for the development of a new patient-reported outcome instrument. © 2015 The Australasian College of Dermatologists.
Riehle, M; Mehl, S; Lincoln, T M
2018-04-17
We tested whether people with schizophrenia and prominent expressive negative symptoms (ENS) show reduced facial expressions in face-to-face social interactions and whether this expressive reduction explains negative social evaluations of these persons. We compared participants with schizophrenia with high ENS (n = 18) with participants with schizophrenia with low ENS (n = 30) and with healthy controls (n = 39). Participants engaged in an affiliative role-play that was coded for the frequency of positive and negative facial expression and rated for social performance skills and willingness for future interactions with the respective role-play partner. Participants with schizophrenia with high ENS showed significantly fewer positive facial expressions than those with low ENS and controls and were also rated significantly lower on social performance skills and willingness for future interactions. Participants with schizophrenia with low ENS did not differ from controls on these measures. The group difference in willingness for future interactions was significantly and independently mediated by the reduced positive facial expressions and social performance skills. Reduced facial expressiveness in schizophrenia is specifically related to ENS and has negative social consequences. These findings highlight the need to develop aetiological models and targeted interventions for ENS and its social consequences. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Shu, Ting; Zhang, Bob
2015-04-01
Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive and slightly painful, and its analysis is time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines implemented via LIBSVM, a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illness) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
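A minimal sketch of the texture step described above: convolve one facial block with a small Gabor filter bank and summarize the responses into a feature vector. The filter parameters and the random "block" input are illustrative assumptions, not the paper's settings.

```python
# Gabor filter bank texture features for one facial block (toy parameters).
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))   # Gaussian envelope
    return envelope * np.cos(2 * np.pi * freq * xr)        # cosine carrier

def block_texture_features(block, freqs=(0.1, 0.2),
                           thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    feats = []
    for f in freqs:
        for t in thetas:
            resp = convolve2d(block, gabor_kernel(f, t), mode="same")
            feats.append(np.mean(np.abs(resp)))            # one texture value per filter
    return np.array(feats)

block = np.random.default_rng(0).random((64, 64))          # stand-in for a facial block
print(block_texture_features(block).shape)                 # (8,) -> fed to KNN / SVM
```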
Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study
Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.
2009-01-01
Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384
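A minimal sketch of one analysis idea above: a sliding-window (local) correlation between two automated measurements, such as smile strength and eye constriction, to expose changing (nonstationary) patterns of association. The signals below are simulated, not the study's data.

```python
# Windowed correlation between two facial measurement time series (simulated).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t = np.arange(600)                                    # e.g., frames of an interaction
smile = np.sin(t / 30.0) + 0.3 * rng.normal(size=t.size)
eye_constriction = np.where(t < 300, smile, -smile) + 0.3 * rng.normal(size=t.size)

local_r = pd.Series(smile).rolling(window=60).corr(pd.Series(eye_constriction))
print(local_r.iloc[100], local_r.iloc[500])           # strong positive, then negative
```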
2013-04-01
[Fragmented report excerpt; only partial text is recoverable] ... bioreactor systems: a microfluidic-based flexible fluid exchange patch was developed for porcine wound models. A novel design and fabrication process ... to be established. Subject terms: Biomask, burn injury, facial reconstruction, wound-healing, bioreactor, flexible microfluidic ... layers of facial skin using different cell types and matrices to produce a reliable, physiologic facial and skin construct to restore functional ...
Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.
Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G
2014-01-20
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives error criteria for the motion parameters. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of the contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
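A minimal sketch (one standard approach, not necessarily the paper's algorithm) of estimating the global 3-D rigid head motion from matched feature points in two frames, via the SVD-based least-squares fit. The point sets and the 5-degree rotation are invented for illustration.

```python
# Rigid motion (R, t) from 3-D feature point correspondences via SVD (Kabsch-style fit).
import numpy as np

def estimate_rigid_motion(P, Q):
    """Find R, t minimizing ||Q - (R @ P + t)|| over corresponding (N, 3) points."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(2)
pts = rng.normal(size=(30, 3))                        # head feature points, frame k
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
pts2 = pts @ R_true.T + np.array([0.1, 0.0, 0.2])     # frame k+1
R_est, t_est = estimate_rigid_motion(pts, pts2)
print(np.allclose(R_est, R_true, atol=1e-6), t_est)
```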
Yankouskaya, Alla; Humphreys, Glyn W.; Rotshtein, Pia
2014-01-01
Facial identity and emotional expression are two important sources of information for daily social interaction. However the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the “Garner” paradigm, the composite face effect and the divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of the relations depends on experience; where experience leads to higher degree of inter-dependence in the processing of identity and expressions. We propose that this change occurs as integrative processes are more efficient than parallel. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field. PMID:25452722
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce “StorySense”, an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children’s motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage “low-motor” interaction (i.e., tapping or swiping). Our app senses a greater breadth of motion (e.g., clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child’s gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism. PMID:25954336
Tapia, Antonio; Ruiz-de-Erenchun, Richard; Rengifo, Miguel
2006-08-01
One of the main objectives in facial lifting is to achieve an adequate facial contour, to enhance facial characteristics. Sometimes, facial areas are more or less accentuated, resulting in an unbalanced or inharmonious facial contour; this can be resolved in the context of a face lift. In the middle third of the face, two anatomical regions define the facial silhouette: the malar contour, with its bone support and superficial structures and, at the cheek level, intimately associated with the mastication system and the facial nerve, the buccal fat pad or Bichat fat pad. The authors describe their experience since 1998 using the double approach to malar atrophy and buccal fat pad hypertrophy in 194 patients with facial aging signs undergoing a face lift. All patients were offered a face lift with partial resection of the fat pad through facial incisions and a stronger malar projection using an inverse superficial musculoaponeurotic system flap. The main complications observed regarding this surgical technique, in order of appearance, were light asymmetry, caused by a persistent hematoma or swelling; paresthesia of the buccal and zygomatic branches, which resolved spontaneously; and a light sinking of the cheek caused by excessive resection. One patient underwent correction with a fat injection. The superior superficial musculoaponeurotic system flap and buccal fat pad resection provided excellent aesthetic results for a more harmonic and proportioned facial contour during rhytidectomy. Particularly in patients with round faces, the authors were able to obtain permanent malar symmetry and projection in addition to diminishing the cheek fullness.
Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro
2013-06-01
Longstanding unilateral facial paralysis is best addressed with microneurovascular muscle transplantation. Neurotization can be obtained from the cross-facial or the masseter nerve. The authors present a quantitative comparison of both procedures using the FACIAL CLIMA system. Forty-seven patients with complete unilateral facial paralysis underwent reanimation with a free gracilis transplant neurotized to either a cross-facial nerve graft (group I, n=20) or to the ipsilateral masseteric nerve (group II, n=27). Commissural displacement and commissural contraction velocity were measured using the FACIAL CLIMA system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using the independent samples t test. Mean percentage of recovery of both parameters were compared between the groups using the independent samples t test. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I (p=0.001 and p=0.014, respectively) but not in group II. Intergroup comparisons showed that both commissural displacement and commissural contraction velocity were higher in group II, with significant differences for commissural displacement (p=0.048). Mean percentage of recovery of both parameters was higher in group II, with significant differences for commissural displacement (p=0.042). Free gracilis muscle transfer neurotized by the masseteric nerve is a reliable technique for reanimation of longstanding facial paralysis. Compared with cross-facial nerve graft neurotization, this technique provides better symmetry and a higher degree of recovery. Therapeutic, III.
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
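A minimal sketch of the feature-and-classification stage described above: HOG features from small facial patches feeding a linear classifier to label "skin" versus "facial hair". The paper uses a dynamic sparse classifier; a LinearSVC stands in here for brevity, and the patches are fabricated.

```python
# HOG features + linear classifier for skin vs facial-hair patches (toy data).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
# Fabricated stand-ins: smooth patches as "skin", high-frequency patches as "hair".
skin = [rng.random((32, 32)) * 0.1 + 0.5 for _ in range(40)]
hair = [rng.random((32, 32)) for _ in range(40)]
X = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for p in skin + hair])
y = np.array([0] * 40 + [1] * 40)           # 0 = skin, 1 = facial hair

clf = LinearSVC(dual=False).fit(X, y)
print(clf.score(X, y))                      # training accuracy of the toy model
```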
Outcome of facial physiotherapy in patients with prolonged idiopathic facial palsy.
Watson, G J; Glover, S; Allen, S; Irving, R M
2015-04-01
This study investigated whether patients who remain symptomatic more than a year following idiopathic facial paralysis gain benefit from tailored facial physiotherapy. A two-year retrospective review was conducted of all symptomatic patients. Data collected included: age, gender, duration of symptoms, Sunnybrook facial grading system scores pre-treatment and at last visit, and duration of treatment. The study comprised 22 patients (with a mean age of 50.5 years (range, 22-75 years)) who had been symptomatic for more than a year following idiopathic facial paralysis. The mean duration of symptoms was 45 months (range, 12-240 months). The mean duration of follow up was 10.4 months (range, 2-36 months). Prior to treatment, the mean Sunnybrook facial grading system score was 59 (standard deviation = 3.5); this had increased to 83 (standard deviation = 2.7) at the last visit, with an average improvement in score of 23 (standard deviation = 2.9). This increase was significant (p < 0.001). Tailored facial therapy can improve facial grading scores in patients who remain symptomatic for prolonged periods.
Biometric Fusion Demonstration System Scientific Report
2004-03-01
[Fragmented report excerpt; only partial text is recoverable] ... verification and facial recognition, searching watchlist databases comprised of full or partial facial images or voice recordings. Multiple-biometric ... (Table-of-contents residue: 2.2.1.1 Fingerprint and Facial Recognition; 2.2.1.2 Iris Recognition and Facial Recognition; report DRDC Ottawa CR 2004-056.)
A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.
Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito
2017-12-01
Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles and clarified these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined as an affected-side/healthy-side ratio of >1.5 or <0.5. We observed that gene expression in Bell's palsy changes with the degree of facial nerve palsy. In particular, genes in the muscle, neuron, and energy categories tended to fluctuate with the degree of facial nerve palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.
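A minimal sketch of the expression-change criterion stated above: flag a gene when the affected-side/healthy-side expression ratio is >1.5 or <0.5. The expression values and gene symbols are invented placeholders, not study data.

```python
# Fold-change flagging of genes by affected/healthy expression ratio (invented values).
import pandas as pd

expr = pd.DataFrame(
    {"healthy": [210.0, 95.0, 400.0, 33.0],
     "affected": [520.0, 90.0, 150.0, 31.0]},
    index=["MYH3", "NEFL", "ATP2A1", "GAPDH"],     # hypothetical gene panel
)
expr["ratio"] = expr["affected"] / expr["healthy"]
expr["changed"] = (expr["ratio"] > 1.5) | (expr["ratio"] < 0.5)
print(expr)
```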
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package based on the requirements of this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Automatic prediction of facial trait judgments: appearance vs. structural models.
Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi
2011-01-01
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Three-dimensional visualization system as an aid for facial surgical planning
NASA Astrophysics Data System (ADS)
Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles
2001-05-01
We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of human facial appearance after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into an object-oriented, portable, and scriptable visualization software package that allows the practitioner to manipulate and visualize them interactively. Several level-of-detail (LOD) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft tissue deformations by creating a physically based spring model between the two tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information, such as participation hints, at the vertex level between a warped generic model and the facial mesh.
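A minimal sketch (a 2-D toy, not the paper's tissue model) of the equilibrium computation described above: minimize the elastic energy of a small spring network with some nodes clamped to "bone" positions. The geometry and stiffness are invented.

```python
# Static equilibrium of a spring network by energy minimization (toy 2-D example).
import numpy as np
from scipy.optimize import minimize

nodes0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])     # rest positions
springs = [(0, 1), (1, 2), (0, 2)]                           # connectivity
rest_len = {s: np.linalg.norm(nodes0[s[0]] - nodes0[s[1]]) for s in springs}
k = 10.0                                                     # spring stiffness (assumed)
fixed = {0: np.array([0.0, 0.0]), 1: np.array([1.3, 0.0])}   # bone nodes, one displaced

def energy(flat):
    pos = flat.reshape(-1, 2).copy()
    for i, p in fixed.items():                               # clamp bone-attached nodes
        pos[i] = p
    stretch = [np.linalg.norm(pos[i] - pos[j]) - rest_len[(i, j)] for i, j in springs]
    return 0.5 * k * np.sum(np.square(stretch))

res = minimize(energy, nodes0.ravel().copy(), method="L-BFGS-B")
print(res.x.reshape(-1, 2)[2])     # new equilibrium of the free soft-tissue node
```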
A small-world network model of facial emotion recognition.
Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto
2016-01-01
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
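A minimal sketch (toy data, not the study's ratings) of the network construction and small-world check described above: connect facial-emotion nodes whose pairwise similarity exceeds a threshold, then compare path length and clustering to a size- and density-matched random graph. The 0.8 threshold is an arbitrary choice.

```python
# Build a similarity-threshold graph and compute small-world diagnostics (networkx).
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n = 81                                               # 6 prototypes + 75 morphs
sim = rng.random((n, n))
sim = (sim + sim.T) / 2                              # symmetric similarity matrix (toy)

G = nx.Graph()
G.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n) if sim[i, j] > 0.8)

if nx.is_connected(G):
    L, C = nx.average_shortest_path_length(G), nx.average_clustering(G)
    R = nx.gnm_random_graph(n, G.number_of_edges(), seed=1)
    L_r = nx.average_shortest_path_length(R) if nx.is_connected(R) else float("nan")
    C_r = nx.average_clustering(R)
    # Small-world signature: L close to L_r while C is clearly larger than C_r.
    print(f"L={L:.2f} (random {L_r:.2f}), C={C:.2f} (random {C_r:.2f})")
```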
Facial measurements for frame design.
Tang, C Y; Tang, N; Stewart, M C
1998-04-01
Anthropometric data for the purpose of spectacle frame design are scarce in the literature. Definitions of the facial features to be measured with existing systems of facial measurement are often not specific enough for frame design and manufacturing. Currently, for individual frame design, experienced personnel collect data with facial rules or instruments. A new measuring system is proposed, making use of a template in the form of a spectacle frame. Upon fitting the template onto a subject, most of the measuring references can be defined. Such a system can be administered by less-trained personnel and can be used for research covering a larger population.
Spoofing detection on facial images recognition using LBP and GLCM combination
NASA Astrophysics Data System (ADS)
Sthevanie, F.; Ramadhani, K. N.
2018-03-01
The challenge for facial-image-based security systems is how to detect facial image falsification, such as facial image spoofing. Spoofing occurs when someone tries to pose as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a high detection rate compared to that of using only LBP features or GLCM features.
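A minimal sketch of the combined texture descriptor described above: an LBP histogram concatenated with GLCM statistics for one face image, using scikit-image (note that versions before 0.19 spell the GLCM functions greycomatrix/greycoprops). The input image is a random stand-in.

```python
# LBP histogram + GLCM statistics as one combined texture feature vector.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

img = (np.random.default_rng(5).random((64, 64)) * 255).astype(np.uint8)  # stand-in face

lbp = local_binary_pattern(img, P=8, R=1, method="uniform")   # values in 0..9
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True)
glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                        for p in ("contrast", "homogeneity", "energy", "correlation")])

feature_vector = np.hstack([lbp_hist, glcm_feats])   # fed to a live-vs-spoof classifier
print(feature_vector.shape)
```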
Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru
2015-09-01
Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with combined auricular flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, and donor sites were closed primarily. Donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
A unified probabilistic framework for spontaneous facial action modeling and understanding.
Tong, Yan; Chen, Jixu; Ji, Qiang
2010-02-01
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
A wearable device for emotional recognition using facial expression and physiological response.
Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim
2016-08-01
This paper introduces a glasses-type wearable system to detect the user's emotions using facial expression and physiological responses. The system is designed to acquire facial expression through a built-in camera, and physiological responses such as the photoplethysmogram (PPG) and electrodermal activity (EDA), in an unobtrusive way. We used emotion-inducing video clips to test the system's suitability in the experiment. The results showed a few meaningful properties that associate emotions with the facial expressions and physiological responses captured by the developed wearable device. We expect that this wearable system, with a built-in camera and physiological sensors, may be a good solution for monitoring the user's emotional state in daily life.
NASA Technical Reports Server (NTRS)
2002-01-01
Goddard Space Flight Center and Triangle Research & Development Corporation collaborated to create "Smart Eyes," a charge-coupled device camera that, for the first time, could read and measure bar codes without the use of lasers. The camera operated in conjunction with software and algorithms created by Goddard and Triangle R&D that could track bar code position and direction with speed and precision, as well as with software that could control robotic actions based on vision system input. This accomplishment was intended for robotic assembly of the International Space Station, helping NASA to increase production while using less manpower. After successfully completing the two-phase SBIR project with Goddard, Triangle R&D was awarded a separate contract from the U.S. Department of Transportation (DOT), which was interested in using the newly developed NASA camera technology to heighten automotive safety standards. In 1990, Triangle R&D and the DOT developed a mask made from a synthetic, plastic skin covering to measure facial lacerations resulting from automobile accidents. By pairing NASA's camera technology with Triangle R&D's and the DOT's newly developed mask, a system that could provide repeatable, computerized evaluations of laceration injury was born.
Hontanilla, Bernardo; Marré, Diego
2012-11-01
Masseteric and hypoglossal nerve transfers are reliable alternatives for reanimating short-term facial paralysis. To date, few studies exist in the literature comparing these techniques. This work presents a quantitative comparison of masseter-facial transposition versus hemihypoglossal facial transposition with a nerve graft using the Facial Clima system. Forty-six patients with complete unilateral facial paralysis underwent reanimation with either hemihypoglossal transposition with a nerve graft (group I, n = 25) or direct masseteric-facial coaptation (group II, n = 21). Commissural displacement and commissural contraction velocity were measured using the Facial Clima system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using a paired sample t test. Then, mean percentages of recovery of both parameters were compared between the groups using an independent sample t test. Onset of movement was also compared between the groups. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I but not in group II. Mean percentage of recovery of both parameters did not differ between the groups. Patients in group II showed a significantly faster onset of movement compared with those in group I (62 ± 4.6 days versus 136 ± 7.4 days, p = 0.013). Reanimation of short-term facial paralysis can be satisfactorily addressed by means of either hemihypoglossal transposition with a nerve graft or direct masseteric-facial coaptation. However, with the latter, better symmetry and a faster onset of movement are observed. In addition, masseteric nerve transfer avoids morbidity from nerve graft harvesting. Therapeutic, III.
Heaton, James T.; Kowaleski, Jeffrey M.; Bermejo, Roberto; Zeigler, H. Philip; Ahlgren, David J.; Hadlock, Tessa A.
2008-01-01
The occurrence of inappropriate co-contraction of facially innervated muscles in humans (synkinesis) is a common sequela of facial nerve injury and recovery. We have developed a system for studying facial nerve function and synkinesis in restrained rats using non-contact opto-electronic techniques that enable simultaneous bilateral monitoring of eyelid and whisker movements. Whisking is monitored in high spatio-temporal resolution using laser micrometers, and eyelid movements are detected using infrared diode and phototransistor pairs that respond to the increased reflection when the eyelids cover the cornea. To validate the system, eight rats were tested with multiple five-minute sessions that included corneal air puffs to elicit blink and scented air flows to elicit robust whisking. Four rats then received unilateral facial nerve section and were tested at weeks 3–6. Whisking and eye blink behavior occurred both spontaneously and under stimulus control, with no detectable difference from published whisking data. Proximal facial nerve section caused an immediate ipsilateral loss of whisking and eye blink response, but some ocular closures emerged due to retractor bulbi muscle function. The independence observed between whisker and eyelid control indicates that this system may provide a powerful tool for identifying abnormal co-activation of facial zones resulting from aberrant axonal regeneration. PMID:18442856
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores; this happens when hormonal changes make the skin oilier. The problem is that people do not have a real assessment of their skin's sensitivity in terms of the facial fluid build-up that tends to develop into acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne region as they are characterized differently.
Creating speech-synchronized animation.
King, Scott A; Parent, Richard E
2005-01-01
We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.
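A minimal sketch of dominance-function blending in the spirit described above (after Cohen and Massaro): each viseme contributes a target value weighted by a time-decaying dominance, and the articulator trajectory is the normalized blend. The parameter values and the single "lip opening" channel are illustrative assumptions.

```python
# Coarticulation by dominance-function blending of viseme targets (toy parameters).
import numpy as np

def dominance(t, center, alpha=1.0, theta=6.0, power=1.0):
    """Exponentially decaying dominance of a viseme centered at `center` (seconds)."""
    return alpha * np.exp(-theta * np.abs(t - center) ** power)

visemes = [          # (center time in s, target lip opening for one control parameter)
    (0.10, 0.8),     # e.g., open vowel
    (0.30, 0.0),     # e.g., bilabial closure
    (0.50, 0.5),
]

t = np.linspace(0.0, 0.6, 121)
num = sum(dominance(t, c) * target for c, target in visemes)
den = sum(dominance(t, c) for c, _ in visemes)
trajectory = num / den           # smooth curve through competing viseme targets
print(trajectory[::30].round(2))
```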
[Surgical treatment in otogenic facial nerve palsy].
Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng
2008-06-01
To study the characteristics of facial nerve palsy due to four different ear diseases, including chronic otitis media, Hunt syndrome, tumor, and physical or chemical factors, and to discuss the principles of the surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy due to these diseases were retrospectively analyzed; all cases underwent surgical management from October 1991 to March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients, including 10 males and 14 females, were analyzed; facial palsy was due to cholesteatoma in 12 cases, chronic otitis media in 3 cases, Hunt syndrome in 3 cases, acute otitis media in 2 cases, physical or chemical factors in 2 cases, and tumor in 2 cases. All cases were treated with operations that included facial nerve decompression, lesion resection with facial nerve decompression, and lesion resection without facial nerve decompression; 1 patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases, and grade VI in 1 case. Removing the lesion completely is the basic requirement in surgery for otogenic facial palsy; moreover, it is important to perform facial nerve decompression soon after lesion removal.
NASA Astrophysics Data System (ADS)
Nagata, Takeshi; Matsuzaki, Kazutoshi; Taniguchi, Kei; Ogawa, Yoshinori; Imaizumi, Kazuhiko
2017-03-01
Three-dimensional facial aging changes over more than 10 years in the same individuals are being measured at the National Research Institute of Police Science. Using these measured data as training data, we performed machine learning and developed a system that converts an input 2D face image into a 3D face model and simulates aging. Here, we report on the processing and accuracy of our system.
Dynamic facial expression recognition based on geometric and texture features
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Zengfu
2018-04-01
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations between pairwise images are used to perform the dynamic facial expression recognition tasks. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integrating both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Toward DNA-based facial composites: preliminary results and validation.
Claes, Peter; Hill, Harold; Shriver, Mark D
2014-11-01
The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face these predictions could help investigations out of an impasse. Although, there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width or alternatively using dimensionality reduction techniques such as principal component analysis where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex and ancestry matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions either locally in particular parts of the face or in terms of overall similarity is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge this is the first effort at generating facial composites from DNA and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
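A minimal sketch of the composite construction described above: start from a sex- and ancestry-matched base face (a dense mesh flattened to a vector) and overlay per-SNP effect vectors scaled by genotype. All arrays, sizes, and effect magnitudes are invented placeholders, not the study's models.

```python
# Additive overlay of SNP shape effects on a base face (toy dimensions).
import numpy as np

rng = np.random.default_rng(6)
n_vertices = 5000
base_face = rng.normal(size=3 * n_vertices)          # base-face mesh (x, y, z per vertex)

n_snps = 24
effects = rng.normal(scale=0.01, size=(n_snps, 3 * n_vertices))  # per-SNP shape effects
genotypes = rng.integers(0, 3, size=n_snps)          # 0/1/2 copies of each effect allele

predicted_face = base_face + genotypes @ effects     # overlay SNP effects on the base face
print(predicted_face.shape)
```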
Matsumiya, Lynn C; Sorge, Robert E; Sotocinal, Susana G; Tabaka, John M; Wieskopf, Jeffrey S; Zaloum, Austin; King, Oliver D; Mogil, Jeffrey S
2012-01-01
Postoperative pain management in animals is complicated greatly by the inability to recognize pain. As a result, the choice of analgesics and their doses has been based on extrapolation from greatly differing pain models or on the use of measures with unclear relevance to pain. We recently developed the Mouse Grimace Scale (MGS), a facial-expression-based pain coding system adapted directly from scales used in nonverbal human populations. The MGS has been shown to be a reliable, highly accurate measure of spontaneous pain of moderate duration, and therefore is particularly useful in the quantification of postoperative pain. In the present study, we quantified the relative intensity and duration of postoperative pain after a sham ventral ovariectomy (laparotomy) in outbred mice. In addition, we compiled dose-response data for 4 commonly used analgesics: buprenorphine, carprofen, ketoprofen, and acetaminophen. We found that postoperative pain in mice, as defined by facial grimacing, lasts for 36 to 48 h and appears to show relative exacerbation during the early dark (active) photophase. We found that buprenorphine was highly effective in inhibiting postoperative pain-induced facial grimacing in mice at doses equal to or lower than current recommendations, that carprofen and ketoprofen are effective only at doses markedly higher than those currently recommended, and that acetaminophen was ineffective at any dose used. We suggest the revision of practices for postoperative pain management in mice in light of these findings. PMID:22330867
Marsh, Penny; Beauchaine, Theodore P.; Williams, Bailey
2009-01-01
Although deficiencies in emotional responding have been linked to externalizing behaviors in children, little is known about how discrete response systems (e.g., expressive, physiological) are coordinated during emotional challenge among these youth. We examined time-linked correspondence of sad facial expressions and autonomic reactivity during an empathy-eliciting task among boys with disruptive behavior disorders (n = 31) and controls (n = 23). For controls, sad facial expressions were associated with reduced sympathetic (lower skin conductance level, lengthened cardiac preejection period [PEP]) and increased parasympathetic (higher respiratory sinus arrhythmia [RSA]) activity. In contrast, no correspondence between facial expressions and autonomic reactivity was observed among boys with conduct problems. Furthermore, low correspondence between facial expressions and PEP predicted externalizing symptom severity, whereas low correspondence between facial expressions and RSA predicted internalizing symptom severity. PMID:17868261
NASA Astrophysics Data System (ADS)
Nozawa, Akio; Takei, Yuya
The aim of this study was to quantitatively evaluate the effects of self-administered facial massage performed by hand or with a facial roller. The psychophysiological effects of facial massage were evaluated. Physiological status was assessed via the central nervous system and the autonomic nervous system. The central nervous system was assessed by electroencephalogram (EEG). The autonomic nervous system was assessed by peripheral skin temperature (PST) and heart rate variability (HRV) with spectral analysis; in the spectral analysis of HRV, the high-frequency (HF) components were evaluated. The State-Trait Anxiety Inventory (STAI), the Profile of Mood States (POMS), and subjective sensory ratings on a Visual Analog Scale (VAS) were administered to evaluate psychological status. The results suggest that facial massage maintained brain activity and had strong stress-alleviating effects.
2004-05-01
[Fragmented report excerpt; only partial text is recoverable] Army Soldier System Command: http://www.natick.armv.mil ... Facial Recognition Program Manager, Army Technical Lead: Mark Chandler ... security force with a facial recognition system. Mike Holloran, technology officer with the 6th Fleet, directed LCDR Hoa Ho and CAPT(s) Todd Morgan to ... USN 6th Fleet was accomplished, with the admiral expressing his support for continuing the evaluation of a facial recognition system. ...
Wireless electronic-tattoo for long-term high fidelity facial muscle recordings
NASA Astrophysics Data System (ADS)
Inzelberg, Lilah; David Pur, Moshe; Steinberg, Stanislav; Rand, David; Farah, Maroun; Hanein, Yael
2017-05-01
Facial surface electromyography (sEMG) is a powerful tool for the objective evaluation of human facial expressions and has accordingly been suggested in recent years for a wide range of psychological and neurological assessment applications. Owing to technical challenges, in particular the cumbersome gelled electrodes, the use of facial sEMG has so far been limited. Using innovative temporary tattoos optimized specifically for facial applications, we demonstrate the use of sEMG as a platform for robust identification of facial muscle activation. In particular, differentiation between diverse facial muscles is demonstrated. We also demonstrate a wireless version of the system. The potential use of the presented technology for user-experience monitoring and objective psychological and neurological evaluations is discussed.
2012-03-13
[Fragmented report excerpt; only partial text is recoverable] ... aspects associated with the use of fingerprinting. Another form of physical biometrics is facial recognition. Facial recognition, unlike other ... have originated back in the early 1960s. One of the leading pioneers in facial recognition biometrics was Woodrow W. Bledsoe, who developed a ... identified match. There are several advantages associated with facial recognition: it is highly reliable, used extensively in security systems, and ...
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
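A minimal sketch of the core idea above: a truncated PCA over motion-capture frames for one facial region acts as an orthogonal blendshape basis, so editing a frame amounts to editing its PCA coefficients (blendshape weights). The data and the retained dimension are toy choices, not the paper's setup.

```python
# PCA-as-blendshape editing of facial motion capture data (toy example).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
frames = rng.normal(size=(200, 90))       # 200 frames x 30 markers (x, y, z) in one region

pca = PCA(n_components=5).fit(frames)     # truncated, orthogonal basis for the region
weights = pca.transform(frames)           # per-frame blendshape weights (PCA coefficients)

weights[50, 0] += 2.0                     # "edit" one weight of frame 50
edited_frames = pca.inverse_transform(weights)    # back to marker space
print(np.abs(edited_frames[50] - frames[50]).max() > 0)
```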
Two-Stream Transformer Networks for Video-based Face Alignment.
Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on benchmarking video-based face alignment datasets show the very competitive performance of our method in comparison to the state of the art.
Contrasting Specializations for Facial Motion Within the Macaque Face-Processing System
Fisher, Clark; Freiwald, Winrich A.
2014-01-01
Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery’s organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903
Ravaja, Niklas
2004-01-01
We examined, in 36 young adults, the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving versus static facial image and autonomic responses while viewing/listening to news messages read by a newscaster. Autonomic parameters measured were respiratory sinus arrhythmia (RSA), the low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA, a lower LF component of HRV, and shorter PTTs as compared with a static facial image among high-BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high-BAS individuals, given that the BAS directs attention toward positive cues and a moving social stimulus may act as a positive incentive for high-BAS individuals.
Face Recognition Vendor Test 2000: Appendices
2001-02-01
DARPA), NAVSEA Crane Division, and NAVSEA Dahlgren Division are sponsoring an evaluation of commercial off-the-shelf (COTS) facial recognition products... The purpose of these evaluations is to accurately gauge the capabilities of facial recognition biometric systems that are currently available for... or development efforts. Participation in these tests is open to all facial recognition systems on the US commercial market. The U.S. Government will...
Cognitive Processing Hardware Elements
2005-01-31
characters. Results will be presented below. 4. Recognition of human faces. There are many other possible applications, such as facial recognition and... For the experiments in facial recognition, we have used a 3-layer autoassociative neural network having the following specifications: * The input... using the facial recognition system described in the section above as an example. This system uses an autoassociative neural network containing over 10...
Choi, Kyung-Sik; Kim, Min-Su; Kwon, Hyeok-Gyu; Jang, Sung-Ho
2014-01-01
Objective: Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods: We prospectively collected data from 11 patients with VS who underwent preoperative DTT of the facial nerve. Imaging results were correlated with intraoperative findings. Postoperative DTT was performed at 3 months after surgery. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results: Facial nerve courses on preoperative tractography were entirely consistent with intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3, anterosuperior in 2, and posteroinferior in 1. Postoperative facial nerve tractography confirmed preservation of the nerve in all patients. No patient had severe facial paralysis at one year postoperatively. Conclusion: This study shows that DTT for preoperative identification of the facial nerve in VS surgery is an accurate and useful radiological method and could help improve facial nerve preservation. PMID:25289119
Heaton, James T.; Knox, Christopher; Malo, Juan; Kobler, James B.; Hadlock, Tessa A.
2013-01-01
Functional recovery is typically poor after facial nerve transection and surgical repair. In rats, whisking amplitude remains greatly diminished after facial nerve regeneration, but can recover more completely if the whiskers are periodically mechanically stimulated during recovery. Here we present a robotic “whisk assist” system for mechanically driving whisker movement after facial nerve injury. Movement patterns were either pre-programmed to reflect natural amplitudes and frequencies, or movements of the contralateral (healthy) side of the face were detected and used to control real-time mirror-like motion on the denervated side. In a pilot study, twenty rats were divided into nine groups and administered one of eight different whisk assist driving patterns (or control) for 5–20 minutes, five days per week, across eight weeks of recovery after unilateral facial nerve cut and suture repair. All rats tolerated the mechanical stimulation well. Seven of the eight treatment groups recovered average whisking amplitudes that exceeded controls, although small group sizes precluded statistical confirmation of group differences. The potential to substantially improve facial nerve recovery through mechanical stimulation has important clinical implications, and we have developed a system to control the pattern and dose of stimulation in the rat facial nerve model. PMID:23475376
Fairbairn, Catharine E.; Sayette, Michael A.; Aalen, Odd O.; Frigessi, Arnoldo
2014-01-01
Researchers have hypothesized that men gain greater reward from alcohol than women. However, alcohol-administration studies testing participants drinking alone have offered weak support for this hypothesis. Research suggests that social processes may be implicated in gender differences in drinking patterns. We examined the impact of gender and alcohol on “emotional contagion”—a social mechanism central to bonding and cohesion. Social drinkers (360 male, 360 female) consumed alcohol, placebo, or control beverages in groups of three. Social interactions were video recorded, and both Duchenne and non-Duchenne smiling were continuously coded using the Facial Action Coding System. Results revealed that Duchenne smiling (but not non-Duchenne smiling) contagion correlated with self-reported reward and typical drinking patterns. Importantly, Duchenne smiles were significantly less “infectious” among sober male versus female groups, and alcohol eliminated these gender differences in smiling contagion. Findings identify new directions for research exploring social-reward processes in the etiology of alcohol problems. PMID:26504673
An Argument for the Use of Biometrics to Prevent Terrorist Access to the United States
2003-12-06
that they are who they claim to be. Remote methods such as facial recognition do not rely on interaction with the individual, and can be used with or...quickly, although there is a relatively high error rate. Acsys Biometric Systems, a leader in facial recognition , reports their best system has only a...change their appearance. The facial recognition system also presents a privacy concern in the minds of many individuals. By remotely scanning without an
Ansó, Juan; Dür, Cilgia; Gavaghan, Kate; Rohrbach, Helene; Gerber, Nicolas; Williamson, Tom; Calvo, Enric M; Balmer, Thomas Wyss; Precht, Christina; Ferrario, Damien; Dettmer, Matthias S; Rösler, Kai M; Caversaccio, Marco D; Bell, Brett; Weber, Stefan
2016-01-01
A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. Minimally invasive cochlear implantation is enabled by image-guided, robotic-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, so safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to effectively distinguish the close distance ranges encountered in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters, in combination with the proposed neuromonitoring system, could reliably detect an imminent collision with the facial nerve. For accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was corrected on postoperative microcomputed tomographic images. From 29 trajectories analyzed in five different subjects, a correlation between stimulus threshold and drill-to-facial-nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial-nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system provide sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower narrowed the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases in which the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or no damage to the nerve fascicles. Our findings suggest that, although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variations in patient-specific anatomy, the correlations at very close distances to the facial nerve and the high levels of specificity would enable a binary warning system to be developed using the proposed probe at low stimulation currents.
The Effects of Alcohol on the Emotional Displays of Whites in Interracial Groups
Fairbairn, Catharine E.; Sayette, Michael A.; Levine, John M.; Cohn, Jeffrey F.; Creswell, Kasey G.
2017-01-01
Discomfort during interracial interactions is common among Whites in the U.S. and is linked to avoidance of interracial encounters. While the negative consequences of interracial discomfort are well-documented, understanding of its causes is still incomplete. Alcohol consumption has been shown to decrease negative emotions caused by self-presentational concern but increase negative emotions associated with racial prejudice. Using novel behavioral-expressive measures of emotion, we examined the impact of alcohol on displays of discomfort among 92 White individuals interacting in all-White or interracial groups. We used the Facial Action Coding System and comprehensive content-free speech analyses to examine affective and behavioral dynamics during these 36-minute exchanges (7.9 million frames of video data). Among Whites consuming nonalcoholic beverages, those assigned to interracial groups evidenced more facial and speech displays of discomfort than those in all-White groups. In contrast, among intoxicated Whites there were no differences in displays of discomfort between interracial and all-White groups. Results highlight the central role of self-presentational concerns in interracial discomfort and offer new directions for applying theory and methods from emotion science to the examination of intergroup relations. PMID:23356562
Selective stimulation of facial muscles with a penetrating electrode array in the feline model
Sahyouni, Ronald; Bhatt, Jay; Djalilian, Hamid R.; Tang, William C.; Middlebrooks, John C.; Lin, Harrison W.
2017-01-01
Objective: Permanent facial nerve injury is a difficult challenge for both patients and physicians given its potential for debilitating functional, cosmetic, and psychological sequelae. Although current surgical interventions have provided considerable advancements in facial nerve rehabilitation, they often fail to fully address all impairments. We aim to introduce an alternative approach to facial nerve rehabilitation. Study design: Acute experiments in animals with normal facial function. Methods: The study included three anesthetized cats. Four facial muscles (levator auris longus, orbicularis oculi, nasalis, and orbicularis oris) were monitored with a standard electromyographic (EMG) facial nerve monitoring system with needle electrodes. The main trunk of the facial nerve was exposed and a 16-channel penetrating electrode array was placed into the nerve. Electrical current pulses were delivered to each stimulating electrode individually. Elicited EMG voltage outputs were recorded for each muscle. Results: Stimulation through individual channels selectively activated restricted nerve populations, resulting in selective contraction of individual muscles. Increasing stimulation current levels resulted in increasing EMG voltage responses. Typically, selective activation of two or more distinct muscles was successfully achieved via a single placement of the multi-channel electrode array by selection of appropriate stimulation channels. Conclusion: We have established in the animal model the ability of a penetrating electrode array to selectively stimulate restricted fiber populations within the facial nerve and to selectively elicit contractions in specific muscles and regions of the face. These results show promise for the development of a facial nerve implant system. PMID:27312936
Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.
Reinl, Maren; Bartels, Andreas
2014-11-15
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
A Facial Control Method Using Emotional Parameters in Sensibility Robot
NASA Astrophysics Data System (ADS)
Shibata, Hiroshi; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori
The “Ifbot” robot communicates with people by considering its own “emotions”. Ifbot has many facial expressions for communicating enjoyment. These are used to express its internal emotions, purposes, reactions to external stimuli, and entertainment such as singing songs. All of these facial expressions were developed manually by designers. Under this approach, every facial motion we want Ifbot to express must be designed by hand, which is not realistic. We have therefore developed a system that converts Ifbot's emotions into facial expressions automatically. In this paper, we propose a method for creating Ifbot's facial expressions from emotional parameters, which represent its internal emotions computationally.
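As a hypothetical illustration of such a conversion, the sketch below maps an internal emotion vector to facial actuator targets through a single designer-tuned weight matrix, so expressions need not be authored one by one. The emotion set, actuator names, and weights are all invented, not Ifbot's actual parameters.

import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "surprise"]          # assumed set
ACTUATORS = ["brow_height", "eyelid_open", "mouth_corner"]  # assumed motors

# Rows: actuators; columns: emotions. Values are illustrative gains.
W = np.array([[ 0.4, -0.5, -0.8,  0.9],
              [ 0.2, -0.6,  0.3,  1.0],
              [ 0.9, -0.9, -0.4,  0.1]])

def facial_command(emotion_params):
    """Map an emotion vector (values in [0, 1]) to actuator positions [-1, 1]."""
    e = np.asarray([emotion_params.get(name, 0.0) for name in EMOTIONS])
    return dict(zip(ACTUATORS, np.clip(W @ e, -1.0, 1.0)))

print(facial_command({"joy": 0.8, "surprise": 0.3}))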
Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex
Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.
2015-01-01
Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS) while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and an averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of them sensitive to only one of these cues. In contrast, in the anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688
Relations of Early Goal-Blockage Response and Gender to Subsequent Tantrum Behavior
ERIC Educational Resources Information Center
Sullivan, Margaret W.; Lewis, Michael
2012-01-01
Infants and their mothers participated in a longitudinal study of the sequelae of infant goal-blockage responses. Four-month-old infants participated in a standard contingency learning and goal-blockage procedure during which anger and sad facial expressions to the blockage were coded. When infants were 12 and 20 months old, mothers completed a…
Generating and Describing Affective Eye Behaviors
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Zheng
The manner of a person's eye movements conveys rich nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from an AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccades). A rule-based approach that utilizes MPEG-4 FAPs (facial animation parameters) to generate primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions) is introduced. In addition, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language) that enables authors to describe and generate the emotional eye movement of virtual agents is proposed.
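A minimal sketch of the mixture rule, with invented parameter names and values (not MPEG-4's actual FAP set): an intermediate emotion is a weighted blend of two primary-emotion parameter vectors that include eye-movement cues such as pupil size and blink rate.

import numpy as np

PARAMS = ["brow_raise", "lid_close", "pupil_size", "blink_rate", "gaze_down"]

PRIMARY = {
    "joyful": np.array([0.3, 0.1, 0.7, 0.2, 0.0]),
    "sad":    np.array([0.1, 0.6, 0.3, 0.1, 0.8]),
    "afraid": np.array([0.9, 0.0, 0.9, 0.7, 0.1]),
}

def intermediate(emotion_a, emotion_b, weight=0.5):
    """Blend two primary emotions into an intermediate eye-behavior vector."""
    vec = weight * PRIMARY[emotion_a] + (1 - weight) * PRIMARY[emotion_b]
    return dict(zip(PARAMS, vec.round(2)))

print(intermediate("sad", "afraid", weight=0.6))  # an anguish-like mixture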
Automatically Log Off Upon Disappearance of Facial Image
2005-03-01
log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face... ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only...
Fisher, Katie; Towler, John; Eimer, Martin
2016-01-08
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred
2012-08-01
Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the area of generative face models (applying active appearance models and their extensions, optical flow, and video tracking), have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable production of this system, to enhance clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept, leading to better results and higher quality of life for patients with impaired facial function.
Seager, Dennis Craig; Kau, Chung How; English, Jeryl D; Tawfik, Wael; Bussa, Harry I; Ahmed, Abou El Yazeed M
2009-09-01
To compare the facial morphologies of an adult Egyptian population with those of a Houstonian white population. The three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMDface System photographed 186 subjects from two population groups (Egypt and Houston). All of the participants from both population groups were between 18 and 30 years of age and had no apparent facial anomalies. All facial images were overlaid and superimposed, and a complex mathematical algorithm was performed to generate a composite facial average (one male and one female) for each subgroup (EGY-M: Egyptian male subjects; EGY-F: Egyptian female subjects; HOU-M: Houstonian male subjects; and HOU-F: Houstonian female subjects). The computer-generated facial averages were superimposed based on a previously validated superimposition method, and the facial differences were evaluated and quantified. Distinct facial differences were evident between the subgroups evaluated, involving various regions of the face including the slant of the forehead and the nasal, malar, and labial regions. Overall, the mean facial differences between the Egyptian and Houstonian female subjects were 1.33 ± 0.93 mm, while the differences between the Egyptian and Houstonian male subjects were 2.32 ± 2.23 mm. The ranges of differences for the female and male population pairings were 14.34 mm and 13.71 mm, respectively. The average adult Egyptian and white Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages.
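Schematically, the comparison pipeline can be sketched as follows, with scipy's Procrustes alignment standing in for the validated superimposition method the study used. Note that scipy standardizes scale, so the resulting distances are in normalized units rather than the millimeters reported above; the input arrays are placeholders.

import numpy as np
from scipy.spatial import procrustes

def composite_average(faces):
    """faces: (n_subjects, n_landmarks, 3) corresponding 3D points."""
    return np.asarray(faces).mean(axis=0)

def mean_facial_difference(avg_a, avg_b):
    """Superimpose two composite averages and summarize their differences."""
    std_a, aligned_b, _ = procrustes(avg_a, avg_b)   # standardized and aligned
    d = np.linalg.norm(std_a - aligned_b, axis=1)    # per-landmark distance
    return d.mean(), d.max() - d.min()               # mean difference, range

# stand-in data: 50 and 60 subjects, 200 corresponding landmarks each
group_a = np.random.rand(50, 200, 3)
group_b = np.random.rand(60, 200, 3)
print(mean_facial_difference(composite_average(group_a),
                             composite_average(group_b)))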
Intelligent Facial Recognition Systems: Technology advancements for security applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, C.L.
1993-07-01
Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.
On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information
NASA Astrophysics Data System (ADS)
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information, and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
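A bare-bones illustration of the combination step, with invented weights and probability values: average the per-class probabilities produced by the visual-facial and keyboard-stroke subsystems, weighting the modality assumed to be more reliable, and take the arg-max class.

import numpy as np

STATES = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse(p_facial, p_keyboard, w_facial=0.6):
    """Weighted late fusion of two per-class probability vectors."""
    p = w_facial * np.asarray(p_facial) + (1 - w_facial) * np.asarray(p_keyboard)
    p /= p.sum()                       # renormalize after weighting
    return STATES[int(np.argmax(p))], p

label, _ = fuse([0.5, 0.1, 0.1, 0.1, 0.1, 0.1],
                [0.2, 0.1, 0.1, 0.3, 0.1, 0.2])
print(label)   # "happiness" under these made-up subsystem outputs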
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
Performance of a Working Face Recognition Machine using Cortical Thought Theory
1984-12-04
been considered (2). Recommendations from Bledsoe's study included research on facial-recognition systems that are "completely automatic (remove the... C. L. Location of some facial features. Computer, Palo Alto: Panoramic Research, Aug 1966. 2. Bledsoe, W. W. Man-machine facial recognition: Is... image?" It would seem that the location and size of the features left in this contrast-expanded image contain the essential information of facial...
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
Communicating with Virtual Humans.
ERIC Educational Resources Information Center
Thalmann, Nadia Magnenat
The face is a small part of a human, but it plays an essential role in communication. An open hybrid system for facial animation is presented. It encapsulates a considerable amount of information regarding facial models, movements, expressions, emotions, and speech. The complex description of facial animation can be handled better by assigning…
2013-06-01
fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC)... the UAV is processed on board for facial recognition, and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video... captured by the fixed sensors is sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal...
Schultheiss, Oliver C; Wirth, Michelle M; Waugh, Christian E; Stanton, Steven J; Meier, Elizabeth A; Reuter-Lorenz, Patricia
2008-12-01
This study tested the hypothesis that implicit power motivation (nPower), in interaction with power incentives, influences activation of brain systems mediating motivation. Twelve individuals low (lowest quartile) and 12 individuals high (highest quartile) in nPower, as assessed per content coding of picture stories, were selected from a larger initial participant pool and participated in a functional magnetic resonance imaging study during which they viewed high-dominance (angry faces), low-dominance (surprised faces) and control stimuli (neutral faces, gray squares) under oddball-task conditions. Consistent with hypotheses, high-power participants showed stronger activation in response to emotional faces in brain structures involved in emotion and motivation (insula, dorsal striatum, orbitofrontal cortex) than low-power participants.
Effects of age and mild cognitive impairment on the pain response system.
Kunz, Miriam; Mylius, Veit; Schepelmann, Karsten; Lautenbacher, Stefan
2009-01-01
Both age and dementia have been shown to affect nociception and pain processing. The question arises whether mild cognitive impairment (MCI), which is thought to be a transitional stage between normal ageing and dementia, is also associated with alterations in pain processing. The aim of the present study was to answer this question by investigating the impact of age and MCI on the pain response system. Forty young subjects, 45 cognitively unimpaired elderly subjects, and 42 subjects with MCI were investigated using an experimental multi-method approach. The subjects were tested for their subjective (pain ratings), motor (RIII reflex), facial (Facial Action Coding System), and autonomic (sympathetic skin response and evoked heart rate response) responses to noxious electrical stimulation of the sural nerve. We found significant group differences in the autonomic responses to noxious stimulation. The sympathetic skin response amplitude was significantly reduced in the cognitively unimpaired elderly subjects compared to younger subjects, and to an even greater degree in subjects with MCI. The evoked heart rate response was reduced to a similar degree in both groups of aged subjects. Regression analyses within the two groups of elderly subjects revealed that age and, in the MCI group, cognitive status were significant predictors of the decrease in autonomic responsiveness to noxious stimulation. Apart from the autonomic parameters, no other pain parameter differed between the three groups. The pain response system thus appeared to be largely unaltered in MCI patients compared to cognitively unimpaired individuals of the same age. Only the sympathetic responsiveness qualified as an indicator of early aging effects as well as of pathophysiology associated with MCI, both of which seemed to affect the pain system independently of each other.
Lee, Kang-Woo; Kim, Sang-Hwan; Gil, Young-Chun; Hu, Kyung-Seok; Kim, Hee-Jin
2017-10-01
Three-dimensional (3D) scanning-based morphological studies of the face are commonly included in various clinical procedures. This study evaluated the validity and reliability of a 3D scanning system by comparing it against an ultrasound (US) imaging system and direct measurement of facial skin. The facial skin thickness at 19 landmarks was measured using the three different methods in 10 embalmed adult Korean cadavers. Skin thickness was first measured using the ultrasound device, then 3D scanning of the facial skin surface was performed. The skin on the left half of the face was then gently dissected, deviating slightly right of the midline, to separate it from the subcutaneous layer, and the harvested facial skin's thickness was measured directly using calipers. The dissected specimen was then scanned again, and the scanned images of the undissected and dissected faces were superimposed using Morpheus Plastic Solution (version 3.0) software. Finally, the facial skin thickness was calculated from the superimposed images. The ICC value for the correlation between the 3D scanning system and direct measurement showed excellent reliability (0.849, 95% confidence interval = 0.799-0.887). Bland-Altman analysis showed a good level of agreement between the 3D scanning system and direct measurement (bias = 0.49 ± 0.49 mm, mean ± SD). These results demonstrate that the 3D scanning system precisely reflects structural changes before and after skin dissection. Therefore, an in-depth morphological study using this 3D scanning system could provide depth data about the main anatomical structures of the face, thereby providing crucial anatomical knowledge for various clinical applications. Clin. Anat. 30:878-886, 2017. © 2017 Wiley Periodicals, Inc.
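For reference, the Bland-Altman agreement statistic quoted above (bias plus limits of agreement) can be computed in a few lines; the paired readings below are hypothetical, not the study's data.

import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, limits

scan   = [1.9, 2.3, 1.4, 2.8, 2.1]   # mm, hypothetical 3D-scan readings
direct = [1.5, 1.8, 1.0, 2.2, 1.6]   # mm, hypothetical caliper readings
print(bland_altman(scan, direct))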
Hutto, Justin R; Vattoth, Surjith
2015-01-01
In this article, we elaborate a practical approach to superficial facial anatomy enabling easy identification of the facial mimic muscles by classifying them according to their shared common insertion sites. The facial mimic muscles are often difficult to identify on imaging. By tracing them from their common group insertion sites back to their individual origins as well as understanding key anatomic relationships, radiologists can more accurately identify these muscles.
Adaptation effects to attractiveness of face photographs and art portraits are domain-specific
Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph
2013-01-01
We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690
Bidirectional Gender Face Aftereffects: Evidence Against Normative Facial Coding.
Cronin, Sophie L; Spence, Morgan L; Miller, Paul A; Arnold, Derek H
2017-02-01
Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female and all other faces look more masculine as a consequence (a unidirectional effect). According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine (a bidirectional effect). Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than the adaptors after adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.
Wirthlin, J; Kau, C H; English, J D; Pan, F; Zhou, H
2013-09-01
The objective of this study was to compare the facial morphologies of an adult Chinese population with those of a Houstonian white population. Three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera system, 3dMDface™. Using the system, 100 subjects from a Houstonian population and 71 subjects from a Chinese population were photographed. A complex mathematical algorithm was performed to generate a composite facial average (one for males and one for females) for each subgroup. The computer-generated facial averages were then superimposed based on a previously validated superimposition method, and the facial averages were evaluated for differences. Distinct facial differences were evident between the subgroups evaluated. These areas included the nasal tip, the peri-orbital area, the malar process, the labial region, the forehead, and the chin. Overall, the mean facial difference between the Chinese and Houstonian female averages was 2.73 ± 2.20 mm, while the difference between the Chinese and Houstonian males was 2.83 ± 2.20 mm. The percent similarities for the female and male population pairings were 10.45% and 12.13%, respectively. The average adult Chinese and Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages that should be considered in the planning of treatment. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
HAPPEN CAN'T HEAR: An Analysis of Code-Blends in Hearing, Native Signers of American Sign Language
ERIC Educational Resources Information Center
Bishop, Michele
2011-01-01
Hearing native signers often learn sign language as their first language and acquire features that are characteristic of sign languages but are not present in equivalent ways in English (e.g., grammatical facial expressions and the structured use of space for setting up tokens and surrogates). Previous research has indicated that bimodal…
Standardization of Code on Dental Procedures
1992-02-13
oral hard and soft tissues using a periodontal probe, mirror, and explorer, and bitewing, panoramic, or other radiographs as... of living tissue or inert material into periodontal osseous defects to regenerate new periodontal attachment (bone, periodontal ligament, and cementum... Simple (up to 5 cm). Repair and/or suturing of simple to moderately complicated wounds of facial and/or oral soft tissues. 7211 1.8 Repair...
Measurement of facial movements with Photoshop software during treatment of facial nerve palsy
Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen
2011-01-01
BACKGROUND: Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after treatment was performed with the FGS and with Photoshop measurements. RESULTS: The mean values of the FGS before and after treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, the mean changes in facial expressions on the impaired side relative to the normal side, across the rest position and three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after treatment, respectively (p < 0.001). Spearman's correlation coefficient between the values obtained by the two methods was 0.66 (p < 0.001). CONCLUSIONS: Evaluating facial nerve palsy using Photoshop was more objective than using the FGS. Therefore, it may be recommended to use this method instead. PMID:22973325
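A rough sketch of the measurement idea, assuming landmark pixel coordinates and a known pixel-to-millimeter calibration (both invented here): compute each side's landmark excursion between rest and movement and express the impaired side relative to the normal side.

import numpy as np

PX_PER_MM = 4.0  # assumed scale from a ruler visible in the photograph

def excursion_mm(rest_xy, move_xy, px_per_mm=PX_PER_MM):
    """Landmark displacement between rest and movement, in millimeters."""
    return float(np.linalg.norm(np.subtract(move_xy, rest_xy)) / px_per_mm)

def symmetry_ratio(impaired_rest, impaired_move, normal_rest, normal_move):
    """1.0 means the impaired side moves as far as the normal side."""
    return (excursion_mm(impaired_rest, impaired_move)
            / excursion_mm(normal_rest, normal_move))

# e.g., mouth-corner landmark during a smile (pixel coordinates, invented):
print(symmetry_ratio((120, 300), (124, 294), (360, 300), (348, 288)))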
Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.
Silverstein, Evan; Snyder, Michael
2017-06-01
The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2, using a facial mapping library distributed with the Kinect v2 SDK as the basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used both in the creation of a reference data set and in subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and to identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system over 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and was most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time consuming, and further iterations of the method could provide a robust, easy to implement, and cost-effective supplement to traditional patient identification methods. © 2017 American Association of Physicists in Medicine.
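A simplified reconstruction of the matching scheme, not the published implementation: the 31 fiducial points yield 465 unique point-to-point vectors (31 choose 2), and a live reading matches a stored reference when the mean vector discrepancy falls below a tuned threshold. The threshold value and discrepancy measure are assumptions.

import numpy as np
from itertools import combinations

def pairwise_vectors(points):
    """points: (31, 3) fiducial coordinates -> (465, 3) unique vectors."""
    return np.array([points[j] - points[i]
                     for i, j in combinations(range(len(points)), 2)])

def is_match(reference_pts, live_pts, threshold=5.0):
    """Mean vector discrepancy (assumed units) against an accept threshold."""
    ref, live = pairwise_vectors(reference_pts), pairwise_vectors(live_pts)
    score = np.linalg.norm(ref - live, axis=1).mean()
    return score < threshold, score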
The face is not an empty canvas: how facial expressions interact with facial appearance.
Hess, Ursula; Adams, Reginald B; Kleck, Robert E
2009-12-12
Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.
Pose-variant facial expression recognition using an embedded image system
NASA Astrophysics Data System (ADS)
Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung
2008-12-01
In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. Each pose-variant facial expression is classified as happiness, neutral, sadness, surprise, or anger. Furthermore, in order to evaluate performance for practical applications, this study also built a low-resolution database (160×120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
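The classification stage can be condensed to a short sklearn sketch: pairwise distances among the 14 tracked points form a 91-dimensional feature vector that an SVM maps to one of the five expression classes. The random training data below is a placeholder for AAM-tracked features, and the RBF kernel is an assumption.

import numpy as np
from itertools import combinations
from sklearn.svm import SVC

CLASSES = ["happiness", "neutral", "sadness", "surprise", "anger"]

def distance_features(points):
    """points: (14, 2) feature locations -> (91,) pairwise distances."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

rng = np.random.default_rng(0)
X = np.stack([distance_features(rng.random((14, 2))) for _ in range(100)])
y = rng.integers(0, len(CLASSES), size=100)       # stand-in labels

clf = SVC(kernel="rbf").fit(X, y)
print(CLASSES[int(clf.predict(X[:1])[0])])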
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial actions can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose changes. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we used the Korean face database for model training. For testing, we used the CUbiC FacePix database, a facial expressions and emotion database, the Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
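The detection front end can be sketched with OpenCV and scikit-learn, under assumed bank parameters: convolve the face image with a small Gabor filter bank over several orientations and scales, then reduce the per-pixel response vectors with PCA before point classification.

import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_bank(ksize=21, sigmas=(2.0, 4.0), thetas=4, lambd=10.0):
    """Build a small bank of Gabor kernels over scales and orientations."""
    kernels = []
    for sigma in sigmas:
        for k in range(thetas):
            theta = k * np.pi / thetas
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              lambd, gamma=0.5, psi=0))
    return kernels

def gabor_features(gray_image, kernels):
    """Stack per-pixel filter responses into a (n_pixels, n_filters) matrix."""
    responses = [cv2.filter2D(gray_image, cv2.CV_32F, k).ravel()
                 for k in kernels]
    return np.stack(responses, axis=1)

img = np.random.rand(64, 64).astype(np.float32)     # stand-in face patch
feats = gabor_features(img, gabor_bank())
reduced = PCA(n_components=4).fit_transform(feats)  # compact point descriptors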
Liu, Zhi-dan; He, Jiang-bo; Guo, Si-si; Yang, Zhi-xin; Shen, Jun; Li, Xiao-yan; Liang, Wei; Shen, Wei-dong
2015-08-25
Although many patients with facial paralysis have benefited from or completely recovered after acupuncture or electroacupuncture therapy, it is still difficult to provide intuitive evidence beyond evaluations based on neurological function scales and limited electrophysiologic data. Hence, the aim of this study is to use more intuitive and reliable detection techniques, such as facial nerve magnetic resonance imaging (MRI), nerve electromyography, and F waves, to observe changes in the anatomic morphology of the facial nerve and in nerve conduction before and after acupuncture or electroacupuncture, and to verify their effectiveness in combination with neurological function scales. A total of 132 patients with Bell's palsy (grades III and IV in the House-Brackmann [HB] Facial Nerve Grading System) will be randomly divided into electroacupuncture, manual acupuncture, non-acupuncture, and medicine control groups. All the patients will be given electroacupuncture treatment after the acute period, except for patients in the medicine control group. The acupuncture or electroacupuncture treatments will be performed every 2 days until the patients recover or withdraw from the study. The primary outcome is analysis based on facial nerve function scales (the HB scale and the Sunnybrook facial grading system), and the secondary outcome is analysis based on MRI, nerve electromyography, and F-wave detection. All the patients will undergo MRI within 3 days after the onset of Bell's palsy for observation of the signal intensity and facial nerve swelling of the unaffected and affected sides. They will also undergo facial nerve electromyography and F-wave detection within 1 week after onset. Nerve function will be evaluated using the HB scale and the Sunnybrook facial grading system at each hospital visit until the end of the study. The MRI, nerve electromyography, and F-wave detection will be repeated at 1 month after onset. Chinese Clinical Trials Register identifier: ChiCTR-IPR-14005730. Registered on 23 December 2014.
Scherr, Jessica F; Hogan, Abigail L; Hatton, Deborah; Roberts, Jane E
2017-12-01
This study investigated behavioral indicators of social fear in preschool boys with fragile X syndrome (FXS) with a low degree of autism spectrum disorder (ASD) symptoms (FXS-Low; n = 29), FXS with elevated ASD symptoms (FXS-High; n = 25), idiopathic ASD (iASD; n = 11), and typical development (TD; n = 36). Gaze avoidance, escape behaviors, and facial fear during a stranger approach were coded. Boys with elevated ASD symptoms displayed more avoidant gaze, looking less at the stranger and parent than those with low ASD symptoms across etiologies. The iASD group displayed more facial fear than the other groups. Results suggest etiologically distinct behavioral patterns of social fear in preschoolers with elevated ASD symptoms.
Marshall, Christopher D; Vaughn, Susan D; Sarko, Diana K; Reep, Roger L
2007-01-01
Florida manatees (Trichechus manatus latirostris) possess modified vibrissae that are used in conjunction with specialized perioral musculature to manipulate vegetation for ingestion, and aid in the tactile exploration of their environment. It is therefore expected that manatees possess a large facial motor nucleus that exhibits a complex organization relative to other taxa. The topographical organization of the facial motor nucleus of five adult Florida manatees was analyzed using neuroanatomical methods. Cresyl violet and hematoxylin staining were used to localize the rostrocaudal extent of the facial motor nucleus as well as the organization and location of subdivisions within this nucleus. Differences in size, length, and organization of the facial motor nucleus among mammals correspond to the functional importance of the superficial facial muscles, including perioral musculature involved in the movement of mystacial vibrissae. The facial motor nucleus of Florida manatees was divided into seven subnuclei. The mean rostrocaudal length, width, and height of the entire Florida manatee facial motor nucleus were 6.6 mm (SD = 0.51; range: 6.2-7.5 mm), 4.7 mm (SD = 0.65; range: 4.0-5.6 mm), and 3.9 mm (SD = 0.26; range: 3.5-4.2 mm), respectively. It is speculated that manatees could possess direct descending corticomotoneuron projections to the facial motor nucleus. This conjecture is based on recent data for rodents, similarities in the rodent and sirenian muscular-vibrissal complex, and the analogous nature of the sirenian cortical Rindenkerne system with the rodent barrel system. Copyright (c) 2007 S. Karger AG, Basel.
Agency and facial emotion judgment in context.
Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai
2013-06-01
Past research showed that East Asians' belief in holism is expressed in their tendency, more pronounced than North Americans', to include background facial emotions in the evaluation of target faces. However, this pattern can also be interpreted as North Americans' tendency to downplay background facial emotions because they conceptualize facial emotion as a volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded by either affectively salient landscape scenery or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.
Facial color is an efficient mechanism to visually transmit emotion.
Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M
2018-04-03
Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. Copyright © 2018 the Author(s). Published by PNAS.
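As a rough sketch of how color alone can carry the emotion signal described above, one can pool chromatic statistics from a few facial regions and train a linear classifier on them. The region boxes, the CIELAB color space, and the classifier below are illustrative assumptions, not the authors' pipeline.

```python
# Toy emotion-from-color classifier: chromatic statistics per face region.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# Fractional (x0, y0, x1, y1) boxes within a cropped face; assumed layout.
REGIONS = {"forehead": (0.25, 0.05, 0.75, 0.25),
           "cheeks":   (0.10, 0.45, 0.90, 0.70),
           "chin":     (0.30, 0.75, 0.70, 0.95)}

def color_features(face_bgr):
    """Mean and std of the chromatic a*/b* channels in each region."""
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w = lab.shape[:2]
    feats = []
    for x0, y0, x1, y1 in REGIONS.values():
        roi = lab[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w), 1:3]
        feats += [roi.mean(axis=(0, 1)), roi.std(axis=(0, 1))]
    return np.concatenate(feats)

def train_color_decoder(faces, labels):
    """faces: cropped BGR face images; labels: emotion categories."""
    X = np.stack([color_features(f) for f in faces])
    return LinearSVC().fit(X, labels)
```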
Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex.
Morin, Elyse L; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G; Bell, Andrew H
2015-09-01
Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which being sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Ahmed, Lubna
2018-03-01
The ability to correctly interpret facial expressions is key to effective social interactions. People are well rehearsed and generally very efficient at correctly categorizing expressions. However, does their ability to do so depend on how cognitively loaded they are at the time? Using repeated-measures designs, we assessed the sensitivity of facial expression categorization to cognitive resource availability by measuring people's expression categorization performance during concurrent low and high cognitive load situations. In Experiment 1, participants categorized the 6 basic upright facial expressions in a six-alternative forced-choice response paradigm while maintaining low or high loading information in working memory (N = 40; 60 observations per load condition). In Experiment 2, they did so for both upright and inverted faces (N = 46; 60 observations per load and inversion condition). In both experiments, expression categorization for upright faces was worse during high versus low load. Categorization rates actually improved with increased load for the inverted faces. The opposing effects of cognitive load on upright and inverted expressions are explained in terms of a load-related dispersion of the attentional window. Overall, the findings support that expression categorization is sensitive to cognitive resource availability and moreover suggest that, in this paradigm, it is the perceptual processing stage of expression categorization that is affected by cognitive load. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Gor, Troy; Kau, Chung How; English, Jeryl D; Lee, Robert P; Borbely, Peter
2010-03-01
The aim of this study was to assess the use of 3-dimensional facial averages in determining facial morphologic differences in 2 white population groups. Three-dimensional images were obtained in a reproducible and controlled environment from a commercially available stereo-photogrammetric camera capture system. The 3dMDface system (3dMD, Atlanta, Ga) photographed 200 subjects from 2 population groups (Budapest, Hungary, and Houston, Tex); each group included 50 men and 50 women, aged 18 to 30 years. Each face was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was used until an average composite face of 1 man and 1 woman was obtained for each subgroup (Hungarian men, Hungarian women, Texas men, and Texas women). These average facial composites were superimposed (men and women) based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed between the population groups. These differences could be seen in the nasal, malar, lips, and lower facial regions. In general, the mean facial differences were 0.55 +/- 0.60 mm between the Hungarian and Texas women, and 0.44 +/- 0.42 mm between the Hungarian and Texas men. The ranges of differences were -2.02 to 3.77 and -2.05 to 1.94 mm for the female and male pairings, respectively. Three-dimensional facial averages representing the facial soft-tissue morphology of adults can be used to assess diagnostic and treatment regimens for patients by population. Each population is different with respect to their soft-tissue structures, and traditional soft-tissue normative data (eg, white norms) should be altered and used for specific groups. American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Analysis of facial expressions in parkinson's disease through video-based automatic methods.
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
2017-04-01
The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction or loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects exhibited, on average, larger distances than PD patients across the tasks, confirming that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
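The expressivity measure described here — Euclidean distance of the tracked facial model from a neutral baseline — reduces to a few lines once a face tracker supplies landmark coordinates. A minimal sketch follows; the peak-displacement summary per task is an assumed simplification, not the study's exact statistic.

```python
# Distance-from-neutral expressivity, assuming landmarks from any face tracker.
import numpy as np

def expressivity_curve(frames_xy, neutral_xy):
    """frames_xy: (T, L, 2) landmark tracks; neutral_xy: (L, 2) baseline."""
    per_landmark = np.linalg.norm(frames_xy - neutral_xy[None], axis=2)  # (T, L)
    return per_landmark.mean(axis=1)  # one expressivity value per frame

def task_score(frames_xy, neutral_xy):
    """Peak displacement over a task; expected lower in PD than in controls."""
    return float(expressivity_curve(frames_xy, neutral_xy).max())
```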
Implant-retained craniofacial prostheses for facial defects
Federspil, Philipp A.
2012-01-01
Craniofacial prostheses, also known as epistheses, are artificial substitutes for facial defects. The breakthrough in the rehabilitation of facial defects with implant-retained prostheses came with the development of modern silicones and bone anchorage. Following the discovery of the osseointegration of titanium in the 1950s, titanium dental implants were introduced in the 1960s. In 1977, the first extraoral titanium implant was inserted in a patient. Later, various solitary extraoral implant systems were developed. Grouped implant systems have also been developed that may be placed more reliably in areas with little available bone, as in the nasal and orbital regions, or the ideally pneumatised mastoid process. Today, even large facial prostheses may be securely retained. The classical atraumatic surgical technique has remained an unchanged prerequisite for successful implantation of any system. This review outlines the basic principles of osseointegration as well as the main features of extraoral implantology. PMID:22073096
Feature Selection on Hyperspectral Data for Dismount Skin Analysis
2014-03-27
Excerpts: 2.4.1 Melanosome Estimation; 2.4.2 Facial Recognition using … require compliant interaction in order to establish their identification. Previously, traditional facial recognition systems have been enhanced by HSI by … calculated as a fundamental method to differentiate between people [38]. In addition, the area of facial recognition has benefited from the rich spectral …
ERIC Educational Resources Information Center
Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2014-01-01
Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…
Longfier, Laetitia; Soussignan, Robert; Reissland, Nadja; Leconte, Mathilde; Marret, Stéphane; Schaal, Benoist; Mellier, Daniel
2016-12-01
Facial expressions of 5-6 month-old infants born preterm and at term were compared while they tasted solid foods (two fruit and two vegetable purées), given by the mother, for the first time. Videotapes of facial reactions to these foods were objectively coded during the first six successive spoons of each test food using Baby FACS and subjectively rated by naïve judges. Infant temperament was also assessed by the parents using the Infant Behaviour Questionnaire. Contrary to our expectations, infants born preterm expressed fewer negative emotions than infants born full-term. Naïve judges rated infants born preterm as displaying more liking than their full-term counterparts when tasting the novel foods. The analysis of facial expressions during the six spoonfuls of four successive meals (at 1-week intervals) suggested a familiarization effect, with the frequency of negative expressions decreasing after tasting the second spoon, regardless of infant age, type of food, and order of presentation. Finally, the positive and negative dimensions of temperament reported by the parents were related to the objective and subjective coding of affective reactions toward foods in infants born preterm or full-term. Our research indicates that premature infants are more accepting of novel foods than term infants, and this could be used to support the development of healthy eating patterns in premature infants. Further research is needed to clarify whether the reduced negativity of infants born prematurely in response to novel solid foods reflects a reduction of an adaptive avoidant behaviour during the introduction of novel foods. Copyright © 2016. Published by Elsevier Ltd.
Differential patterns of implicit emotional processing in Alzheimer's disease and healthy aging.
García-Rodríguez, Beatriz; Fusari, Anna; Rodríguez, Beatriz; Hernández, José Martín Zurdo; Ellgring, Heiner
2009-01-01
Implicit memory for emotional facial expressions (EFEs) was investigated in young adults, healthy old adults, and mild Alzheimer's disease (AD) patients. Implicit memory is revealed by the effect of experience on performance by studying previously encoded versus novel stimuli, a phenomenon referred to as perceptual priming. The aim was to assess the changes in the patterns of priming as a function of aging and dementia. Participants identified EFEs taken from the Facial Action Coding System and the stimuli used represented the emotions of happiness, sadness, surprise, fear, anger, and disgust. In the study phase, participants rated the pleasantness of 36 faces using a Likert-type scale. Subsequently, the response to the 36 previously studied and 36 novel EFEs was tested when they were randomly presented in a cued naming task. The results showed that implicit memory for EFEs is preserved in AD and aging, and no specific age-related effects on implicit memory for EFEs were observed. However, different priming patterns were evident in AD patients that may reflect pathological brain damage and the effect of stimulus complexity. These findings provide evidence of how progressive neuropathological changes in the temporal and frontal areas may affect emotional processing in more advanced stages of the disease.
Humor, laughter, and the cerebellum: insights from patients with acute cerebellar stroke.
Frank, B; Andrzejewski, K; Göricke, S; Wondzinski, E; Siebler, M; Wild, B; Timmann, D
2013-12-01
The extent of cerebellar involvement in cognition and emotion is still a topic of ongoing research. In particular, the cerebellar role in humor processing and the control of laughter is not well known. A hypermetric dysregulation of affective behavior has been assumed in cerebellar damage. We therefore investigated humor comprehension and appreciation, as well as the expression of laughter, in 21 patients in the acute or subacute state after stroke restricted to the cerebellum, and in the same number of matched healthy control subjects. Patients with acute and subacute cerebellar damage showed preserved comprehension and appreciation of humor on a validated humor test evaluating comprehension, funniness, and aversiveness of cartoons ("3WD Humor Test"). Additionally, there was no difference compared with healthy controls in the number and intensity of facial reactions and laughter while observing jokes, humorous cartoons, or video sketches, measured by the Facial Action Coding System. However, as depression scores were significantly increased in patients with cerebellar stroke, a concealing effect of accompanying depression cannot be excluded. The current findings add to descriptions in the literature that cognitive or affective disorders in patients with lesions restricted to the cerebellum, even in the acute state after damage, are frequently mild and might only be present in more sensitive or specific tests.
Hernández, Rosendo G.; Silva-Hucha, Silvia; Morcuende, Sara; de la Cruz, Rosa R.; Pastor, Angel M.; Benítez-Temiño, Beatriz
2017-01-01
Extraocular motoneurons resist degeneration in diseases such as amyotrophic lateral sclerosis. The main objective of the present work was to characterize the presence of neurotrophins in extraocular motoneurons and muscles of the adult rat. We also compared these results with those obtained from other cranial motor systems, such as facial and hypoglossal, which indeed suffer neurodegeneration. Immunocytochemical analysis was used to describe the expression of nerve growth factor, brain-derived neurotrophic factor and neurotrophin-3 in oculomotor, trochlear, abducens, facial, and hypoglossal nuclei of adult rats, and Western blots were used to describe the presence of neurotrophins in extraocular, facial (buccinator), and tongue muscles, which are innervated by the above-mentioned motoneurons. In brainstem samples, brain-derived neurotrophic factor was present both in extraocular and facial motoneuron somata, and to a lesser degree, in hypoglossal motoneurons. Neurotrophin-3 was present in extraocular motor nuclei, while facial and hypoglossal motoneurons were almost devoid of this protein. Finally, nerve growth factor was not present in the soma of any group of motoneurons, although it was present in dendrites of motoneurons located in the neuropil. Neuropil optical density levels were higher in extraocular motoneuron nuclei when compared with facial and hypoglossal nuclei. Neurotrophins could be originated in target muscles, since Western blot analyses revealed the presence of the three molecules in all sampled muscles, to a larger extent in extraocular muscles when compared with facial and tongue muscles. We suggest that the different neurotrophin availability could be related to the particular resistance of extraocular motoneurons to neurodegeneration. PMID:28744196
Bedeschi, Maria Francesca; Marangi, Giuseppe; Calvello, Maria Rosaria; Ricciardi, Stefania; Leone, Francesca Pia Chiara; Baccarin, Marco; Guerneri, Silvana; Orteschi, Daniela; Murdolo, Marina; Lattante, Serena; Frangella, Silvia; Keena, Beth; Harr, Margaret H; Zackai, Elaine; Zollino, Marcella
2017-11-01
Pitt-Hopkins syndrome (PTHS) is a neurodevelopmental disorder characterized by severe intellectual disability and a distinctive facial gestalt. It is caused by haploinsufficiency of the TCF4 gene. The TCF4 protein has different functional domains, with the NLS (nuclear localization signal) domain coded by exons 7-8 and the bHLH (basic Helix-Loop-Helix) domain coded by exon 18. Several alternatively spliced TCF4 variants have been described, allowing for translation of variable protein isoforms. Typical PTHS patients have impairment of at least the bHLH domain. The extent to which impairment of the remaining domains contributes to the final phenotype is not clear. There is recent evidence that certain loss-of-function variants disrupting TCF4 are associated with mild ID, but not with typical PTHS. We describe a frameshift-causing partial gene deletion encompassing exons 4-6 of TCF4 in an adult patient with mild ID and nonspecific facial dysmorphisms but without the typical features of PTHS, and a c.520C > T nonsense variant within exon 8 in a child presenting with a severe phenotype largely mimicking PTHS, but lacking the typical facial dysmorphism. Investigation at the mRNA level, along with a literature review, led us to suggest a preliminary phenotypic map of loss-of-function variants affecting TCF4. An intragenic phenotypic map of loss-of-function variants in TCF4 is suggested here for the first time: variants within exons 1-4 and exons 4-6 give rise to a recurrent phenotype with mild ID not in the spectrum of Pitt-Hopkins syndrome (biallelic preservation of both the NLS and bHLH domains); variants within exons 7-8 cause a severe phenotype resembling PTHS but in the absence of the typical facial dysmorphism (impairment limited to the NLS domain); variants within exons 9-19 cause typical Pitt-Hopkins syndrome (impairment of at least the bHLH domain). Understanding TCF4 molecular syndromology allows for proper nosology in the current era of whole-genome investigations. Copyright © 2017. Published by Elsevier Masson SAS.
Age and sex-related differences in 431 pediatric facial fractures at a level 1 trauma center.
Hoppe, Ian C; Kordahi, Anthony M; Paik, Angie M; Lee, Edward S; Granick, Mark S
2014-10-01
Knowledge of the age- and sex-related changes in the patterns of fractures and concomitant injuries observed in this patient population is helpful in understanding craniofacial development and the treatment of these unique injuries. The goal of this study was to examine all facial fractures occurring in a child and adolescent population (age 18 or less) at a trauma center to determine any age- or sex-related variability in fracture patterns and concomitant injuries. All facial fractures occurring at a trauma center were collected over a 12-year period based on International Classification of Diseases, Ninth Revision (ICD-9) codes. The sample was limited to patients 18 years of age or younger. Age, sex, mechanism, and fracture types were collected and analyzed. During this time period, there were 3147 patients with facial fractures treated at our institution, 353 of whom were children and adolescents. Upon further review, 68 patients were excluded due to insufficient data, leaving 285 patients with a total of 431 fractures for analysis. The most common etiology of injury was assault for males and motor vehicle accidents (MVA) for females. The most common fracture was of the mandible in males and of the orbit in females. The most common etiologies in younger age groups included falls and pedestrian-struck injuries, whereas older age groups exhibited a higher incidence of assault-related injuries. Younger age groups showed a propensity for orbital fractures, as opposed to older age groups, in which mandibular fractures predominated. Intracranial hemorrhage was the most common concomitant injury across most age groups. The differences noted in etiology of injury, fracture patterns, and concomitant injuries between sexes and different age groups likely reflect the differing activities that each group predominantly engages in. In addition, the growing facial skeleton offers varying degrees of protection to the cranial contents as force-absorbing mechanisms develop. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
When Early Experiences Build a Wall to Others’ Emotions: An Electrophysiological and Autonomic Study
Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Sestito, Mariateresa; Ravera, Roberto; Gallese, Vittorio
2013-01-01
Facial expression of emotions is a powerful vehicle for communicating information about others’ emotional states and it normally induces facial mimicry in the observers. The aim of this study was to investigate if early aversive experiences could interfere with emotion recognition, facial mimicry, and with the autonomic regulation of social behaviors. We conducted a facial emotion recognition task in a group of “street-boys” and in an age-matched control group. We recorded facial electromyography (EMG), a marker of facial mimicry, and respiratory sinus arrhythmia (RSA), an index of the recruitment of autonomic system promoting social behaviors and predisposition, in response to the observation of facial expressions of emotions. Results showed an over-attribution of anger, and reduced EMG responses during the observation of both positive and negative expressions only among street-boys. Street-boys also showed lower RSA after observation of facial expressions and ineffective RSA suppression during presentation of non-threatening expressions. Our findings suggest that early aversive experiences alter not only emotion recognition but also facial mimicry of emotions. These deficits affect the autonomic regulation of social behaviors inducing lower social predisposition after the visualization of facial expressions and an ineffective recruitment of defensive behavior in response to non-threatening expressions. PMID:23593374
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant advantage in that it can operate without the cooperation of the people under detection. Hence, facial recognition has potential applications in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate localization on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Masseteric nerve for reanimation of the smile in short-term facial paralysis.
Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro
2014-02-01
Our aim was to describe our experience with the masseteric nerve in the reanimation of short-term facial paralysis. We present our outcomes using a quantitative measurement system and discuss its advantages and disadvantages. Between 2000 and 2012, 23 patients had their facial paralysis reanimated by masseteric-facial coaptation. All patients presented with complete unilateral paralysis. Their background, the aetiology of the paralysis, and the surgical details were recorded. A retrospective study of movement analysis was made using an automatic optical system (Facial Clima). Commissural excursion and commissural contraction velocity were also recorded. The mean age at reanimation was 43(8) years. The aetiology of the facial paralysis included acoustic neurinoma, fracture of the skull base, schwannoma of the facial nerve, resection of a cholesteatoma, and varicella zoster infection. The mean duration of facial paralysis was 16(5) months. Follow-up was more than 2 years in all patients except 1, in whom it was 12 months. The mean time to recovery of tone (as reported by the patient) was 67(11) days. Postoperative commissural excursion was 8(4)mm for the reanimated side and 8(3)mm for the healthy side (p=0.4). Likewise, commissural contraction velocity was 38(10)mm/s for the reanimated side and 43(12)mm/s for the healthy side (p=0.23). The mean percentage of recovery was 92(5)% for commissural excursion and 79(15)% for commissural contraction velocity. Masseteric nerve transposition is a reliable and reproducible option for the reanimation of short-term facial paralysis, with reduced donor site morbidity and good symmetry with the opposite healthy side. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
A 3-dimensional anthropometric evaluation of facial morphology among Chinese and Greek population.
Liu, Yun; Kau, Chung How; Pan, Feng; Zhou, Hong; Zhang, Qiang; Zacharopoulos, Georgios Vasileiou
2013-07-01
The use of 3-dimensional (3D) facial imaging has taken on greater importance as orthodontists use the soft tissue paradigm in the evaluation of skeletal disproportion. Studies have shown that faces differ across populations. To date, no anthropometric evaluations have been made of Chinese and Greek faces. The aim of this study was to compare the facial morphologies of Greeks and Chinese using 3D facial anthropometric landmarks. Three-dimensional facial images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMD face system captured 245 subjects from 2 population groups (Chinese [n = 72] and Greek [n = 173]), and each population was categorized into male and female groups for evaluation. All subjects in the group were between 18 and 30 years old and had no apparent facial anomalies. Twenty-five anthropometric landmarks were identified on the 3D faces of each subject. Soft tissue nasion was set as the "zeroed" reference landmark. Twenty landmark distances were constructed and evaluated within 3 dimensions of space. Six angles, 4 proportions, and 1 construct were also calculated. Student t test was used to analyze each data set obtained within each subgroup. Distinct facial differences were noted between the subgroups evaluated. When comparing the same sex across the 2 populations (eg, male Greeks vs male Chinese), significant differences were noted in more than 80% of the landmark distances calculated. All of the angular measurements differed significantly, and the Chinese were broader in width-to-height facial proportions. In evaluating the lips against the esthetic line, the Chinese population had more protrusive lips. There are differences in the facial morphologies of subjects obtained from a Chinese population versus a Greek population.
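The landmark-distance computation used in this kind of anthropometric comparison is straightforward: translate each subject's 3D landmarks so soft-tissue nasion is the origin, take Euclidean distances, and compare groups with a t test. A minimal sketch, with placeholder landmark names, follows.

```python
# Nasion-zeroed 3D landmark distances and a two-sample comparison.
import numpy as np
from scipy import stats

def zeroed_distances(landmarks):
    """landmarks: dict name -> (x, y, z); distances to soft-tissue nasion."""
    n = np.asarray(landmarks["nasion"], dtype=float)
    return {name: float(np.linalg.norm(np.asarray(p, dtype=float) - n))
            for name, p in landmarks.items() if name != "nasion"}

def compare_groups(dist_a, dist_b):
    """dist_a, dist_b: 1-D arrays of one distance across two populations."""
    return stats.ttest_ind(dist_a, dist_b)  # Student t test, as in the study
```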
Blink Prosthesis For Facial Paralysis Patients
2016-10-01
… predisposes patients to corneal exposure and dry eye complications that are difficult to effectively treat. The proposed innovation will provide a … aesthetic and functional use of the paralyzed eyelid by preventing painful dry eye complications and profound facial disfiguration. The goal of this program … eye blink in patients with unilateral facial nerve paralysis. The system will electrically stimulate the paretic eyelid when EMG electrodes detect …
[Partial facial duplication (a rare diprosopus): Case report and review of the literature].
Es-Seddiki, A; Rkain, M; Ayyad, A; Nkhili, H; Amrani, R; Benajiba, N
2015-12-01
Diprosopus, or partial facial duplication, is a very rare congenital abnormality and a rare form of conjoined twinning. Partial facial duplication may be symmetric or not and may involve the nose, the maxilla, the mandible, the palate, the tongue, and the mouth. A male newborn of consanguineous parents was admitted on his first day of life for facial deformity. He presented with hypertelorism, 2 eyes, a tendency toward nose duplication (a large flattened nose, 2 columellae, and 2 lateral nostrils separated in the midline by a third deformed opening), two mouths, and a duplicated maxilla. Laboratory tests were normal. Craniofacial CT confirmed the maxillary duplication. This type of craniofacial duplication is a rare entity, with about 35 cases reported in the literature. Our patient was similar to a rare case of living diprosopus reported by Stiehm in 1972. Diprosopus is often associated with abnormalities of the gastrointestinal tract, the central nervous system, and the cardiovascular and respiratory systems, and with a high incidence of cleft lip and palate. Surgical treatment consists in the resection of the duplicated components. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Peripheral facial weakness (Bell's palsy).
Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida
2013-06-01
Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary, and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, and degenerative diseases of the central nervous system. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies, and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage, and severe consequences remain in 5% of patients.
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings rely heavily on emotions to convey messages and to interpret them. Emotion detection and face recognition can provide an interface between individuals and technologies, and face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method that recognizes facial expressions by tracking facial points and the distances between them. The method automatically identifies the observed face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
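The point-and-distance idea sketched in this abstract can be made concrete: given 2D facial landmarks per frame from any tracker, a few normalized distances (mouth opening, brow-to-eye gap) move systematically with expression. The landmark indices below follow no particular tracker and are placeholders.

```python
# Simple expression cues from tracked facial points; indices are placeholders.
import numpy as np

def expression_cues(pts, idx):
    """pts: (L, 2) landmarks for one frame; idx: dict of landmark indices."""
    iod = np.linalg.norm(pts[idx["eye_left"]] - pts[idx["eye_right"]])
    return {
        # vertical mouth opening, normalized by inter-ocular distance
        "mouth_open": np.linalg.norm(pts[idx["mouth_top"]] -
                                     pts[idx["mouth_bottom"]]) / iod,
        # brow elevation relative to the eye, normalized the same way
        "brow_raise": np.linalg.norm(pts[idx["brow"]] - pts[idx["eye"]]) / iod,
    }
```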
Expression transmission using exaggerated animation for Elfoid
Hori, Maiya; Tsuruda, Yu; Yoshimura, Hiroki; Iwai, Yoshio
2015-01-01
We propose an expression transmission system using a cellular-phone-type teleoperated robot called Elfoid. Elfoid has a soft exterior that provides the look and feel of human skin, and is designed to transmit the speaker's presence to their communication partner using a camera and microphone. To transmit the speaker's presence, Elfoid sends not only the speaker's voice but also the facial expression captured by the camera. In this research, facial expressions are recognized using a machine learning technique. Elfoid cannot, however, display facial expressions because of its compactness and the lack of sufficiently small actuator motors. To overcome this problem, facial expressions are displayed using Elfoid's head-mounted mobile projector. In an experiment, we built a prototype system and evaluated its subjective usability. PMID:26347686
Pannucci, Christopher J; Reavey, Patrick L; Kaweski, Susan; Hamill, Jennifer B; Hume, Keith M; Wilkins, Edwin G; Pusic, Andrea L
2011-03-01
The Skin Products Assessment Research Committee was created by the Plastic Surgery Educational Foundation in 2006. The Skin Products Assessment Research study aims were to (1) develop an infrastructure for Plastic Surgery Educational Foundation-conducted, industry-sponsored research in facial aesthetic surgery and (2) test the research process by comparing outcomes of the Obagi Nu-Derm System versus conventional therapy as treatment adjuncts for facial resurfacing procedures. The Skin Products Assessment Research study was designed as a multicenter, double-blind, randomized, controlled trial. The study was conducted in women with Fitzpatrick type I to IV skin, moderate to severe facial photodamage, and periocular and/or perioral fine wrinkles. Patients underwent chemical peel or laser facial resurfacing and were randomized to the Obagi Nu-Derm System or a standard care regimen. The study endpoints were time to reepithelialization, erythema, and pigmentation changes. Fifty-six women were enrolled and 82 percent were followed beyond reepithelialization. There were no significant differences in mean time to reepithelialization between Obagi Nu-Derm System and control groups. The Obagi Nu-Derm System group had a significantly higher median erythema score on the day of surgery (after 4 weeks of product use) that did not persist after surgery. Test-retest photographic evaluations demonstrated that both interrater and intrarater reliability were adequate for primary study outcomes. The authors demonstrated no significant difference in time to reepithelialization between patients who used the Obagi Nu-Derm System or a standard care regimen as an adjunct to facial resurfacing procedures. The Skin Products Assessment Research team has also provided a discussion of future challenges for Plastic Surgery Educational Foundation-sponsored clinical research for readers of this article.
Biometric iris image acquisition system with wavefront coding technology
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao
2013-09-01
Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice, and gait style. Among these features, iris recognition is one of the most attractive approaches due to its inherent randomness, texture stability over a lifetime, high entropy density, and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, and uncontrolled illumination. This paper makes three main contributions. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from the optical conservation theorem; through the relationship between the subject and the detector, we could estimate the working-distance limit for a given camera lens and CCD sensor. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization, and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image is restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and F/6.3 optics. Both simulation and experiment validate the proposed coded-aperture imaging system, in which the imaging volume is extended 2.57 times over traditional optics while maintaining sufficient recognition accuracy.
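The quoted capture geometry (3 m working distance, 400 mm focal length, F/6.3) can be checked with first-order optics. A back-of-the-envelope sketch follows; the pixel pitch is an assumed value, not a parameter from the paper.

```python
# Thin-lens magnification and pixel footprint for the quoted geometry.
f = 0.400        # focal length [m], from the abstract
d_obj = 3.0      # working distance [m], from the abstract
pixel = 5e-6     # assumed CCD pixel pitch [m]
iris = 12e-3     # typical iris diameter [m]

m = f / (d_obj - f)             # thin-lens magnification, ~0.154 here
footprint = pixel / m           # object-space extent of one pixel
iris_pixels = iris * m / pixel  # pixels spanned by the iris on the sensor
print(f"m = {m:.3f}, footprint = {footprint * 1e6:.1f} um, "
      f"iris spans ~{iris_pixels:.0f} px")
```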
Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona
2018-01-01
Facial mimicry (FM) is an automatic response to imitate the facial expressions of others. However, neural correlates of the phenomenon are as yet not well established. We investigated this issue using simultaneously recorded EMG and BOLD signals during perception of dynamic and static emotional facial expressions of happiness and anger. During display presentations, BOLD signals and zygomaticus major (ZM), corrugator supercilii (CS) and orbicularis oculi (OO) EMG responses were recorded simultaneously from 46 healthy individuals. Subjects reacted spontaneously to happy facial expressions with increased EMG activity in ZM and OO muscles and decreased CS activity, which was interpreted as FM. Facial muscle responses correlated with BOLD activity in regions associated with motor simulation of facial expressions [i.e., inferior frontal gyrus, a classical Mirror Neuron System (MNS)]. Further, we also found correlations for regions associated with emotional processing (i.e., insula, part of the extended MNS). It is concluded that FM involves both motor and emotional brain structures, especially during perception of natural emotional expressions. PMID:29467691
NASA Astrophysics Data System (ADS)
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method is proposed for facial recognition under varied expressions against neutral face samples of individuals, via recognition of expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is organized by average facial-expression shape and by coarse- and fine-featured facial texture. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Balconi, Michela; Mazza, Guido
2010-05-01
Asymmetry in the comprehension of facial expressions of emotion was explored in the present study by analysing alpha band variation within the right and left cortical sides. Second, the behavioural activation system (BAS) and behavioural inhibition system (BIS) were considered as explanatory factors to verify the effect of a motivational/emotional variable on alpha activity. A total of 19 participants viewed a wide range of facial expressions of emotion (anger, fear, surprise, disgust, happiness, sadness, and neutral) in random order. The results demonstrated that anterior frontal sites were more active than central and parietal sites in response to facial stimuli. Moreover, right- and left-side responses varied as a function of emotion type, with increased right frontal activity for negative, aversive emotions versus an increased left response for positive emotion. Finally, whereas higher-BIS participants generated more right-hemisphere activation for some negative emotions (such as fear, anger, surprise, and disgust), higher-BAS participants were more responsive to positive emotion (happiness) within the left hemisphere. The motivational significance of facial expressions was considered to elucidate cortical differences in participants' responses to emotion types.
Supplemental oxygen: ensuring its safe delivery during facial surgery.
Reyes, R J; Smith, A A; Mascaro, J R; Windle, B H
1995-04-01
Electrosurgical coagulation in the presence of blow-by oxygen is a potential source of fire in facial surgery. A case report of a patient sustaining partial-thickness facial burns secondary to such a flash fire is presented. A fiberglass facial model is then used to study the variables involved in providing supplemental oxygen when an electrosurgical unit is employed. Oxygen flow, oxygen delivery systems, distance from the oxygen source, and coagulation current levels were varied. A nasal cannula and an adapted suction tubing provided the oxygen delivery systems on the model. Both the "displaced" nasal cannula and the adapted suction tubing ignited at a minimum coagulation level of 30 W, an oxygen flow of 2 liters/minute, and a linear distance of 5 cm from the oxygen source. The properly placed nasal cannula did not ignite at any combination of oxygen flow, coagulation current level, or distance from the oxygen source. Facial cutaneous surgery in patients provided supplemental oxygen should be practiced with caution when an electrosurgical unit is used for coagulation. The oxygen delivery systems adapted for use are hazardous and should not be used until their safety has been demonstrated.
Digital assessment of the fetal alcohol syndrome facial phenotype: reliability and agreement study.
Tsang, Tracey W; Laing-Aiken, Zoe; Latimer, Jane; Fitzpatrick, James; Oscar, June; Carter, Maureen; Elliott, Elizabeth J
2017-01-01
To examine the three facial features of fetal alcohol syndrome (FAS) in a cohort of Australian Aboriginal children from two-dimensional digital facial photographs in order to: (1) assess intrarater and inter-rater reliability; (2) identify the racial norms with the best fit for this population; and (3) assess agreement with clinician direct measures. Photographs and clinical data for 106 Aboriginal children (aged 7.4-9.6 years) were sourced from the Lililwan Project. Fifty-eight per cent had confirmed prenatal alcohol exposure, and 13 (12%) met the Canadian 2005 criteria for FAS/partial FAS. Photographs were analysed using the FAS Facial Photographic Analysis Software to generate the mean palpebral fissure length (PFL), three-point ABC-Score, five-point lip and philtrum ranks, and four-point face rank in accordance with the 4-Digit Diagnostic Code. Intrarater and inter-rater reliability of digital ratings was examined in two assessors. Caucasian and African American racial norms for PFL and lip thickness were assessed for best fit, and agreement between digital and direct measurement methods was assessed. Reliability of digital measures was substantial within (kappa: 0.70-1.00) and between assessors (kappa: 0.64-0.89). Clinician and digital ratings showed moderate agreement (kappa: 0.47-0.58). Caucasian PFL norms and the African American Lip-Philtrum Guide 2 provided the best fit for this cohort. In an Aboriginal cohort with a high rate of FAS, assessment of facial dysmorphology using digital methods showed substantial inter- and intrarater reliability. Digital measurement of features has high reliability, and until data are available from a larger population of Aboriginal children, the African American Lip-Philtrum Guide 2 and Caucasian (Strömland) PFL norms provide the best fit for Australian Aboriginal children.
Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.
Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah
2016-01-01
An initial assessment method is proposed that can classify facial paralysis and categorize its severity into one of the six levels of the House-Brackmann (HB) system, based on facial landmark motion measured with an optical flow (OF) algorithm. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis according to the HB system. The proposed method has obtained promising results and may play a pivotal role in improving rehabilitation programs for patients.
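A brief sketch of the tracking step, assuming OpenCV: seed good features inside a facial region of interest and track them frame to frame with pyramidal Lucas-Kanade (KLT). The convex-hull area change used as a motion score below is an illustrative stand-in for the paper's area measurement, not its exact formula.

```python
# KLT tracking of facial points plus a crude area-based motion score.
import cv2
import numpy as np

def track_points(gray_prev, gray_next, roi_mask=None):
    """Seed corners in the ROI of the first frame and track to the next."""
    p0 = cv2.goodFeaturesToTrack(gray_prev, maxCorners=60, qualityLevel=0.01,
                                 minDistance=5, mask=roi_mask)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(gray_prev, gray_next, p0, None)
    ok = status.ravel() == 1
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)

def area_change(pts0, pts1):
    """Relative change of the convex-hull area spanned by tracked points."""
    a0 = cv2.contourArea(cv2.convexHull(pts0.astype(np.float32)))
    a1 = cv2.contourArea(cv2.convexHull(pts1.astype(np.float32)))
    return (a1 - a0) / max(a0, 1e-6)
```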
Importance of the brow in facial expressiveness during human communication.
Neely, John Gail; Lisker, Paul; Drapekin, Jesse
2014-03-01
The objective of this study was to evaluate the laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on the central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Design: prospective experimental. Experimental maneuver: recited speech. Main outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring specific upper and lower areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what appear to be spontaneous, unplanned breakthrough expressions. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study. EBM levels not applicable.
Lee, I-Jui; Chen, Chien-Hsu; Lin, Ling-Yi
2016-01-01
Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, like facial expressions, that usually aid social interaction. In this study, we used software technology to create half-static and dynamic video materials to teach adolescents with ASD to become aware of six basic facial expressions observed in real situations. This intervention system presents a dynamic video of a specific facial element within a static surrounding frame, helping the six adolescents with ASD focus their attention on the relevant dynamic facial expressions and ignore irrelevant ones. Using a multiple baseline design across participants, we found that the intervention learning system provided a simple yet effective way for adolescents with ASD to focus attention on nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions. We conclude that a limited amount of information, with structured and specific close-up visual social cues, helped the participants improve their judgments of the emotional meaning of others' facial expressions.
Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T
2008-10-01
PROBLEM DEFINITION: The goal of this work is the integration of information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses were examined: (I) the facial nerve (N. VII) can be located intraoperatively with high reliability by the stimulation probe, and a computer program can discriminate true-positive EMG signals from false-positive artifacts; (II) the course of the facial nerve can be registered in three-dimensional space from EMG signals on a nerve model in a laboratory test, the individual points of the nerve can be combined into a route model, and the route model can be integrated into the data of digital volume tomography (DVT). (I) Intraoperative EMG signals of the facial nerve were classified by automatic software in 128 measurements, and the results were correlated with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT data set was acquired. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany), and the stimulation probe of the EMG system was tracked by the navigation system, which was extended by a processing unit (MiMed, Technische Universität München, Germany). The classified EMG parameters along the facial nerve route could thus be received, processed, and assembled into a model of the nerve course. Operability was examined at 120 (10 x 12) measuring points. The evaluated classification algorithm categorized the intraoperative facial nerve EMG signals correctly in all measured events. In all 10 attempts, the nerve route was successfully visualized as a three-dimensional model; the different sizes of the individual measuring points correctly reflected the corresponding values of Istim and UEMG. This work proves the feasibility of automatic classification of intraoperative EMG signals of the facial nerve by a processing unit. Furthermore, it demonstrates the feasibility of tracking the position of the stimulation probe and integrating it into a model of the facial nerve route (e.g., in DVT data). The reliability with which the position of the nerve can be captured by the stimulation probe is also incorporated into the resulting route model.
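The automatic true-positive/artifact discrimination reported above can be illustrated with a simple rule: a stimulation-evoked facial-nerve EMG response should fall inside a physiological latency window and exceed a noise-referenced amplitude threshold. The window and threshold values below are assumptions, not the authors' criteria.

```python
# Toy evoked-response vs. artifact classifier for stimulation-locked EMG.
import numpy as np

def classify_emg(trace, fs, stim_idx, win_ms=(2.0, 10.0), k_noise=4.0):
    """trace: 1-D EMG; fs: sampling rate [Hz]; stim_idx: stimulus sample."""
    i0 = stim_idx + int(win_ms[0] * 1e-3 * fs)
    i1 = stim_idx + int(win_ms[1] * 1e-3 * fs)
    noise = np.std(trace[:stim_idx])       # pre-stimulus baseline noise
    peak = np.max(np.abs(trace[i0:i1]))    # response inside latency window
    return bool(peak > k_noise * noise)    # True = evoked response, not artifact
```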
Improvement of emotional healthcare system with stress detection from ECG signal.
Tivatansakul, S; Ohkura, M
2015-01-01
Our emotional healthcare system is designed to cope with users' negative emotions in daily life. To make the system more intelligent, we integrated emotion recognition from facial expressions to provide appropriate services based on the user's current emotional state. However, facial-expression-based emotion recognition confuses some positive, neutral, and negative emotions, which can cause the emotional healthcare system to provide a relaxation service even when users do not have negative emotions. Therefore, to increase the effectiveness of the system in providing the relaxation service, we integrated stress detection from the ECG signal, which may address this confusion. Indeed, our results show that integrating stress detection increases the effectiveness and efficiency of the emotional healthcare system in providing services.
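The abstract does not disclose how stress is computed from the ECG; a common heuristic, shown here as a minimal sketch with hypothetical names and thresholds, infers stress from suppressed heart-rate variability (low RMSSD of the beat-to-beat intervals):

```python
# Hedged sketch: ECG-based stress detection via heart-rate variability (HRV).
# The paper's actual algorithm is not public here; thresholding RMSSD is one
# common heuristic, since sympathetic arousal under stress suppresses HRV.
import numpy as np

def detect_stress(rr_intervals_ms, rmssd_threshold=25.0):
    """Flag stress when the RMSSD of successive RR intervals drops below a threshold.

    rr_intervals_ms: 1-D array of beat-to-beat (RR) intervals in milliseconds,
    e.g. produced by an upstream ECG R-peak detector. Threshold is illustrative.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # root mean square of successive differences
    return rmssd < rmssd_threshold, rmssd

# High-variability (calm) vs. low-variability (stressed) synthetic recordings:
calm = 800 + 40 * np.random.randn(120)
stressed = 650 + 8 * np.random.randn(120)
print(detect_stress(calm), detect_stress(stressed))
```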
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion, and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of next-generation biometric sensor systems.
Sardaru, D; Pendefunda, L
2013-01-01
Facial paralysis, in the form of Bell's syndrome, is an acute paralysis of idiopathic origin. Disability in patients with this condition results from impairment or loss of complex, multidimensional functions of the face, such as emotion expression through facial mimics, facial identity, and communication. This study aimed to present new and improved practical manual techniques in the area of facial neuromuscular facilitation and to review the literature on disability indexes and facial nerve grading. We present the practical modality of using neuro-proprioceptive facilitation techniques, such as rhythmic initiation, repeated stretch (repeated contractions), combination of isotonics, and percussion, and also report the effects of these techniques in three previously evaluated patients with Bell's syndrome. Recovery from facial paralysis can be a difficult and long-lasting process, and the use of a grading system may help the physical therapist. This type of therapy may benefit the patient if the therapist is well trained and familiar with the neurophysiological background.
Use of 3-dimensional surface acquisition to study facial morphology in 5 populations.
Kau, Chung How; Richmond, Stephen; Zhurov, Alexei; Ovsenik, Maja; Tawfik, Wael; Borbely, Peter; English, Jeryl D
2010-04-01
The aim of this study was to assess the use of 3-dimensional facial averages for determining morphologic differences among various population groups. We recruited 473 subjects from 5 populations. Three-dimensional images of the subjects were obtained in a reproducible and controlled environment with a commercially available stereo-photogrammetric camera capture system. Minolta VI-900 (Konica Minolta, Tokyo, Japan) and 3dMDface (3dMD LLC, Atlanta, Ga) systems were used. Each image was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was performed until average composite faces of 1 man and 1 woman were achieved for each subgroup. These average facial composites were superimposed based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed among the groups. The linear differences between surface shells ranged from 0.37 to 1.00 mm for the male groups. The linear differences ranged from 0.28 to 0.87 mm for the women. The color histograms showed that the similarities in facial shells between the subgroups by sex ranged from 26.70% to 70.39% for men and from 36.09% to 79.83% for women. The average linear distance from the signed color histograms ranged from -6.30 to 4.44 mm for the male subgroups and from -6.32 to 4.25 mm for the female subgroups. Average faces can be efficiently and effectively created from a sample of 3-dimensional faces and can be used to compare differences in facial morphologies across populations and sexes. Facial morphologic differences were greatest when entirely different ethnic groups were compared. Facial morphologic similarities were present in comparable groups, but there were large variations in concentrated areas of the face. Copyright 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
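For readers unfamiliar with the averaging step, the sketch below shows the core computation under one strong assumption: every scan has already been registered to a common template so that vertex i corresponds across subjects (the study's actual alignment and superimposition pipeline is not reproduced here, and all names are illustrative):

```python
# Minimal sketch: build an "average face" from corresponded 3-D meshes and
# quantify linear differences between two average shells, in millimetres.
import numpy as np

def average_face(meshes):
    """meshes: array of shape (n_subjects, n_vertices, 3), vertex-corresponded."""
    return np.mean(np.asarray(meshes, dtype=float), axis=0)

def shell_differences(avg_a, avg_b):
    """Mean and maximum per-vertex Euclidean distance between two averages."""
    d = np.linalg.norm(avg_a - avg_b, axis=1)
    return d.mean(), d.max()

# Hypothetical usage with two populations of aligned scans:
group_a = np.random.rand(50, 1000, 3) * 100.0
group_b = group_a + 0.5                 # synthetic offset of ~0.87 mm per vertex
print(shell_differences(average_face(group_a), average_face(group_b)))
```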
Chen, Kuan-Hua; Lwi, Sandy J.; Hua, Alice Y.; Haase, Claudia M.; Miller, Bruce L.; Levenson, Robert W.
2017-01-01
Although laboratory procedures are designed to produce specific emotions, participants often experience mixed emotions (i.e., target and non-target emotions). We examined non-target emotions in patients with frontotemporal dementia (FTD), Alzheimer’s disease (AD), other neurodegenerative diseases, and healthy controls. Participants watched film clips designed to produce three target emotions. Subjective experience of non-target emotions was assessed and emotional facial expressions were coded. Compared to patients with other neurodegenerative diseases and healthy controls, FTD patients reported more positive and negative non-target emotions, whereas AD patients reported more positive non-target emotions. There were no group differences in facial expressions of non-target emotions. We interpret these findings as reflecting deficits in processing interoceptive and contextual information resulting from neurodegeneration in brain regions critical for creating subjective emotional experience. PMID:29457053
Emotional System for Military Target Identification
2009-10-01
algorithm [23], and used it to solve a facial recognition problem. In other works [24, 25], we explored the potential of using emotional neural... other application areas, such as security (facial recognition) and medical (blood cell identification), can also be efficiently used in military... Application of an emotional neural network to facial recognition. Neural Computing and Applications, 18(4), 309-320. [25] Khashman, A. (2009). Blood cell
2014-09-01
biometrics technologies. SUBJECT TERMS: Facial recognition, systems engineering, live video streaming, security cameras, national security... national security by sharing biometric facial recognition data in real time utilizing infrastructures currently in place. It should be noted that the... 9/11), law enforcement (LE) and Intelligence Community (IC) authorities responsible for protecting citizens from threats against national security
INFRARED-BASED BLINK-DETECTING GLASSES FOR FACIAL PACING: TOWARDS A BIONIC BLINK
Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T
2015-01-01
IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step towards reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN Standard safety glasses were equipped with an infrared (IR) emitter/detector pair oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed. SETTING Tertiary care Facial Nerve Center. PARTICIPANTS 24 healthy volunteers. MAIN OUTCOME MEASURE Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted gaze from central to far peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze, but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related lid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6.3% of the time during lateral eye movements, 10.4% during upward movements, 46.5% during downward movements, and 5.6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions disrupted sensor output if they caused substantial squinting or shifted the glasses. CONCLUSION AND RELEVANCE Our blink detection system provides a reliable, non-invasive indication of eyelid closure using an invisible light beam passing in front of the eye. Future versions will aim to mitigate detection errors by using multiple IR emitter/detector pairs mounted on the glasses, and alternative frame designs may reduce shifting of the sensors relative to the eye during facial movements. PMID:24699708
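A minimal sketch of the two detection rules the abstract contrasts (a magnitude threshold versus a first-derivative test) is given below; the signal convention, sampling rate, and thresholds are assumptions, not the authors' values:

```python
# Hedged sketch: blink detection from a horizontal IR beam signal. A pure
# magnitude rule fires on downward gaze; adding a rate-of-change test helps
# separate fast blinks from slow gaze-related lid movements.
import numpy as np

def detect_blinks(ir_signal, fs, mag_thresh, rate_thresh):
    """Return sample indices of detected blink onsets.

    ir_signal: beam-interruption signal (higher = more occluded), sampled at fs Hz.
    mag_thresh: level the signal must exceed (magnitude rule).
    rate_thresh: minimum slope in signal-units/s (first-derivative rule).
    """
    x = np.asarray(ir_signal, dtype=float)
    dx = np.gradient(x) * fs                  # first derivative in units per second
    candidates = (x > mag_thresh) & (dx > rate_thresh)
    onsets = np.flatnonzero(candidates[1:] & ~candidates[:-1]) + 1  # rising edges
    return onsets
```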
Effect of a Facial Muscle Exercise Device on Facial Rejuvenation
Hwang, Ui-jae; Kwon, Oh-yun; Jung, Sung-hoon; Ahn, Sun-hee; Gwak, Gyeong-tae
2018-01-01
Abstract Background The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. Objectives This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Methods Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. Results The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly (right: P < 0.001, left: P = 0.015), while the midfacial surface distances in the middle (right: P = 0.005, left: P = 0.047) and lower (right: P = 0.028, left: P = 0.019) planes as well as the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. Conclusions FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. Level of Evidence: 4 PMID:29365050
[Developmental change in facial recognition by premature infants during infancy].
Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu
2014-09-01
Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be commonly impaired in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were recorded and analyzed using an eye-tracking system (Tobii T60, Tobii Technology, Sweden). Both groups of infants showed a preference towards normal facial expressions; however, no preference towards the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
This paper proposes an automatic facial emotion recognition algorithm that comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank at fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions of the 6 different emotions into 6 classes, one for each emotion. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, locating the fiducial points, and feeding the resulting features to the trained neural architecture.
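As an illustration of the feature-extraction step, the sketch below builds a small Gabor bank and samples filter magnitudes at fiducial points; the bank parameters, the point list, and the FAP features that would be concatenated afterwards are placeholders rather than the paper's exact configuration:

```python
# Illustrative Gabor feature extraction at fiducial points (OpenCV + NumPy).
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(4.0, 6.0, 8.0), orientations=8):
    """Return a list of real Gabor kernels over several scales and orientations."""
    kernels = []
    for sigma in sigmas:
        for k in range(orientations):
            theta = np.pi * k / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              lambd=10.0, gamma=0.5, psi=0.0))
    return kernels

def gabor_features(gray_face, fiducial_points, kernels):
    """Sample each kernel's response magnitude at every (x, y) fiducial point."""
    feats = []
    face = gray_face.astype(np.float32)
    for kern in kernels:
        response = np.abs(cv2.filter2D(face, -1, kern))
        feats.extend(response[y, x] for (x, y) in fiducial_points)
    return np.array(feats)   # in the paper, 14 FAP values would be appended here
```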
[Establishment of the database of the 3D facial models for the plastic surgery based on network].
Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun
2008-07-01
To collect three-dimensional (3D) facial data from 30 patients with facial deformities using a 3D scanner and to establish a professional Internet-based database that can support clinical intervention. The primitive point data of the facial topography were collected by the 3D scanner. The 3D point cloud was then edited with reverse-engineering software to reconstruct the 3D model of the face. The database system was divided into three parts: basic information, disease information, and surgery information. The programming language of the web system is Java. The linkages between the database tables are reliable, and query and data-mining operations are convenient. Users can access the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and web system adapted to plastic surgery of the human face, which can be used both in the clinic and in basic research.
Young, Garry
2009-09-01
Explanations of Capgras delusion and prosopagnosia typically incorporate a dual-route approach to facial recognition in which a deficit in overt or covert processing in one condition is mirror-reversed in the other. Despite this double dissociation, experiences of either patient-group are often reported in the same way--as lacking a sense of familiarity toward familiar faces. In this paper, deficits in the facial processing of these patients are compared to other facial recognition pathologies, and their experiential characteristics mapped onto the dual-route model in order to provide a less ambiguous link between facial processing and experiential content. The paper concludes that the experiential states of Capgras delusion, prosopagnosia, and related facial pathologies are quite distinct, and that this descriptive distinctiveness finds explanatory equivalence at the level of anatomical and functional disruption within the face recognition system. The role of skin conductance response (SCR) as a measure of 'familiarity' is also clarified.
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This suggests that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
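Since the paper's specific PCA modification is not detailed in the abstract, the stand-in below uses plain scikit-learn PCA plus linear regression to make the overall pipeline concrete; treat it as a sketch of the workflow, not the authors' method:

```python
# Sketch: learn a feature space from rated faces, regress attractiveness on the
# projections, and predict scores for unseen faces.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def train(face_vectors, ratings, n_components=20):
    """face_vectors: (n_images, n_pixels) flattened, aligned face images
    rated by a single person; ratings: (n_images,) attractiveness scores."""
    pca = PCA(n_components=n_components).fit(face_vectors)
    model = LinearRegression().fit(pca.transform(face_vectors), ratings)
    return pca, model

def predict(pca, model, new_faces):
    """Predict attractiveness for faces absent from the learning set."""
    return model.predict(pca.transform(new_faces))

# Evaluation would then correlate predict(...) with held-out personal ratings
# (the paper reports a Pearson correlation of 0.89 on new images).
```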
Complications in Pediatric Facial Fractures
Chao, Mimi T.; Losee, Joseph E.
2009-01-01
Despite recent advances in the diagnosis, treatment, and prevention of pediatric facial fractures, little has been published on the complications of these fractures. The existing literature is highly variable regarding both the definition and the reporting of adverse events. Although the incidence of pediatric facial fractures is relatively low, they are strongly associated with other serious injuries. Both the fractures and their treatment may have long-term consequences for the growth and development of the immature face. This article is a selective review of the literature on facial fracture complications, with special emphasis on the complications unique to pediatric patients. We also present our classification system for evaluating adverse outcomes associated with pediatric facial fractures. Prospective, long-term studies are needed to fully understand and appreciate the complexity of treating children with facial fractures and to determine the true incidence, subsequent growth, and nature of their complications. PMID:22110803
Chronic, burning facial pain following cosmetic facial surgery.
Eisenberg, E; Yaari, A; Har-Shai, Y
1996-01-01
Chronic, burning facial pain as a result of cosmetic facial surgery has rarely been reported. In 1994, two female patients presented to our Pain Relief Clinic with chronic facial pain that had developed following aesthetic facial surgery. One patient had undergone bilateral transpalpebral surgery for removal of intraorbital fat to correct exophthalmos, and the other had classical face and anterior-hairline forehead lifts. Pain in both patients was similar in that it was bilateral, symmetric, burning in quality, and aggravated by external stimuli, mainly light touch. It was resistant to multiple analgesic medications and was associated with significant depression and disability. Diagnostic local (lidocaine) and systemic (lidocaine and phentolamine) nerve blocks failed to provide relief. Psychological evaluation revealed that the two patients had clear psychosocial factors that seemed to have further compounded their pain complaints. Tricyclic antidepressants (and biofeedback training in one patient) were modestly effective and produced only partial pain relief.
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
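The rule-based handling of intermediate expressions can be pictured as scaling an archetypal expression's FAP displacements by an activation level; the sketch below uses invented FAP ids and amplitudes purely for illustration, not the MPEG-4 normative values:

```python
# Sketch: synthesize an intermediate expression by attenuating the FAPs of a
# primary (archetypal) expression with an activation parameter in [0, 1].
from typing import Dict

def intermediate_expression(archetype_faps: Dict[int, float],
                            activation: float) -> Dict[int, float]:
    """Linearly scale each FAP displacement by the activation level."""
    if not 0.0 <= activation <= 1.0:
        raise ValueError("activation must lie in [0, 1]")
    return {fap_id: activation * amp for fap_id, amp in archetype_faps.items()}

joy = {3: 120.0, 12: 80.0, 53: 60.0}          # hypothetical FAP amplitudes for "joy"
mild_joy = intermediate_expression(joy, 0.35)  # a weaker, intermediate variant
```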
Thermal Face Protection Delays Finger Cooling and Improves Thermal Comfort during Cold Air Exposure
2011-01-01
Thermal face protection delays finger cooling and improves thermal comfort during cold air... remains exposed. Facial cooling can decrease finger blood flow, reducing finger temperature (Tf). This study examined whether thermal face protection... limits finger cooling and thereby improves thermal comfort and manual dexterity during prolonged cold exposure. Tf was measured in ten volunteers dressed
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
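The paper's joint dictionary/classifier learning is more elaborate than can be shown here, but the classification principle behind sparse-representation methods is easy to sketch: code the test sample over a dictionary of training samples and pick the class with the smallest reconstruction residual (orthogonal matching pursuit is used below as a simple stand-in solver):

```python
# Sketch: sparse-representation classification by class-wise residuals.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(train_X, train_y, test_x, n_nonzero=10):
    """train_X: (n_samples, n_features); train_y: (n_samples,) labels;
    test_x: (n_features,) sample to classify."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(train_X.T, test_x)                 # dictionary atoms = training samples
    coef = omp.coef_
    best_label, best_residual = None, np.inf
    for label in np.unique(train_y):
        mask = train_y == label
        recon = train_X[mask].T @ coef[mask]   # reconstruction from this class only
        residual = np.linalg.norm(test_x - recon)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```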
The development of automated behavior analysis software
NASA Astrophysics Data System (ADS)
Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko
2015-03-01
The measurement of behavior for participants in a conversation scene involves verbal and nonverbal communication. Measurement validity may vary across observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure these behaviors automatically, they prevent participants from talking in a natural way. In this study, we propose a software application program that automatically analyzes the behaviors of participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table, allowing participants to have spontaneous conversation. The proposed software utilizes facial feature tracking based on a constrained local model to observe the changes of the facial features captured by the camera, and the Japanese Female Facial Expression database to recognize expressions. Our experimental results show significant correlations between measurements obtained by human observers and by the software.
Classifying and Standardizing Panfacial Trauma With a New Bony Facial Trauma Score.
Casale, Garrett G A; Fishero, Brian A; Park, Stephen S; Sochor, Mark; Heltzel, Sara B; Christophel, J Jared
2017-01-01
The practice of facial trauma surgery would benefit from a useful quantitative scale that measures the extent of injury. To develop a facial trauma scale that incorporates only reducible fractures and is able to be reliably communicated to health care professionals, a cadaveric tissue study was conducted from October 1 to 3, 2014. Ten cadaveric heads were subjected to various degrees of facial trauma by dropping a fixed mass onto each head. The heads were then imaged with fine-cut computed tomography. A Bony Facial Trauma Scale (BFTS) for grading facial trauma was developed based only on clinically relevant (reducible) fractures. The traumatized cadaveric heads were then scored using this scale as well as 3 existing scoring systems. Regression analysis was used to determine the correlation between the degree of incursion of the fixed mass into the cadaveric heads and trauma severity as rated by the scoring systems. Statistical analysis was performed to determine the correlation of the scores obtained using the BFTS with those of the 3 existing scoring systems. The main outcome measures were the facial trauma scores. Among all 10 cadaveric specimens (9 male donors and 1 female donor; age range, 41-87 years; mean age, 57.2 years), the facial trauma scores obtained using the BFTS correlated with depth of penetration of the mass into the face (odds ratio, 4.071; 95% CI, 1.676-6.448; P = .007) when controlling for presence of dentition and age. The BFTS scores also correlated with scores obtained using 3 existing facial trauma models (Facial Fracture Severity Scale, rs = 0.920; Craniofacial Disruption Score, rs = 0.945; and ZS Score, rs = 0.902; P < .001 for all 3 models). In addition, the BFTS was found to have excellent interrater reliability (0.908; P = .001), similar to the interrater reliability of the other 3 tested trauma scales. Scores obtained using the BFTS were not correlated with dentition (odds ratio, 0.482; 95% CI, -0.087 to 1.053; P = .08; measured as absolute number of teeth) or age of the cadaveric donor (odds ratio, 0.436; 95% CI, -0.068 to 0.944; P = .08). Facial trauma severity as measured by the BFTS correlated with depth of penetration of the fixed mass into the face. In this study, the BFTS was clinically relevant, had high fidelity in communicating the fractures sustained in facial trauma, and correlated well with previously validated models.
Topical Rapamycin Therapy to Alleviate Cutaneous Manifestations of Tuberous Sclerosis Complex
2012-09-01
in the formation of visible facial angiofibromas over time. The lesions appear as red or pink papules distributed over the central face... especially on the nasolabial folds, cheeks, and chin. Lesions appear in early childhood and are present in up to 80% of TSC patients. Facial angiofibromas... facial angiofibromas without causing the side effects seen with systemic administration. This project is a multi-center prospective, randomized
Unmanned Aircraft Systems Sensors
2005-05-01
to development of UAS and UA sensor capabilities. Small UA EO/IR Sensors: EO – Requirement for a facial recognition capability while... Tactical UA EO/IR Sensors: EO – Requirement for a facial recognition capability while remaining undetected (NIIRS 8+). IR – Requirement for... Operational & Theater UA EO/IR Sensors: EO – Requirement for a facial recognition capability while remaining undetected (NIIRS 8+). IR – Requirement
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high-gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
Burriss, Robert P.; Troscianko, Jolyon; Lovell, P. George; Fulford, Anthony J. C.; Stevens, Martin; Quigley, Rachael; Payne, Jenny; Saxton, Tamsin K.; Rowland, Hannah M.
2015-01-01
Human ovulation is not advertised, as it is in several primate species, by conspicuous sexual swellings. However, there is increasing evidence that the attractiveness of women’s body odor, voice, and facial appearance peak during the fertile phase of their ovulatory cycle. Cycle effects on facial attractiveness may be underpinned by changes in facial skin color, but it is not clear if skin color varies cyclically in humans or if any changes are detectable. To test these questions we photographed women daily for at least one cycle. Changes in facial skin redness and luminance were then quantified by mapping the digital images to human long, medium, and shortwave visual receptors. We find cyclic variation in skin redness, but not luminance. Redness decreases rapidly after menstrual onset, increases in the days before ovulation, and remains high through the luteal phase. However, we also show that this variation is unlikely to be detectable by the human visual system. We conclude that changes in skin color are not responsible for the effects of the ovulatory cycle on women’s attractiveness. PMID:26134671
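The mapping step, photographing skin and projecting it onto long-, medium-, and shortwave cone responses, can be approximated as below; the authors calibrated to their own camera, whereas this sketch assumes generic linear sRGB input and uses the standard sRGB-to-XYZ (D65) and Hunt-Pointer-Estevez XYZ-to-LMS matrices:

```python
# Hedged sketch: quantify facial skin redness and luminance from cone (LMS)
# responses estimated from linear RGB pixels of a skin region of interest.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
XYZ_TO_LMS = np.array([[ 0.38971, 0.68898, -0.07868],   # Hunt-Pointer-Estevez
                       [-0.22981, 1.18340,  0.04641],
                       [ 0.0,     0.0,      1.0]])

def redness_and_luminance(rgb_patch):
    """rgb_patch: (..., 3) linear RGB values in [0, 1] from, e.g., a cheek ROI."""
    lms = rgb_patch @ (XYZ_TO_LMS @ SRGB_TO_XYZ).T
    L, M = lms[..., 0], lms[..., 1]
    redness = np.mean((L - M) / (L + M))   # cone-opponent red-green signal
    luminance = np.mean(L + M)             # luminance-like channel
    return redness, luminance
```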
Multivectored Superficial Muscular Aponeurotic System Suspension for Facial Paralysis.
Leach, Garrison; Kurnik, Nicole; Joganic, Jessica; Joganic, Edward
2017-06-01
Facial paralysis is a devastating condition that may cause severe cosmetic and functional deformities. In this study we describe our technique for superficial muscular aponeurotic system (SMAS) suspension using barbed suture and compare the vectors of suspension in relation to the underlying musculature. This study also quantifies the improvements in postoperative symmetry using traditional anthropologic landmarks. The efficacy of this procedure for improving facial paralysis was determined by comparing anthropometric indices and using Procrustes distance between 4 groupings of homologous landmarks plotted on each patient's preoperative and postoperative photos. Geometric morphometrics was used to evaluate change in facial shape and improvement in symmetry postoperatively.To analyze the vector of suspension in relation to the underlying musculature, specific anthropologic landmarks were used to calculate the vector of the musculature in 3 facial hemispheres from cadaveric controls against the vector of repair in our patients. Ten patients were included in our study. Subjectively, great improvement in functional status was achieved. Geometric morphometric analysis demonstrated a statistically significant improvement in facial symmetry. Cadaveric dissection demonstrated that the suture should be placed in the SMAS in vectors parallel to the underlying musculature to achieve these results. There were no complications in our study to date. In conclusion, multivectored SMAS suture suspension is an effective method for restoring static suspension of the face after facial paralysis. This method has the benefit of producing quick, reliable results with improved function, low cost, and low morbidity.
Cler, Meredith J.; Stepp, Cara E.
2015-01-01
Individuals with high spinal cord injuries are unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard to type five-letter words. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that were mapped to five cursor commands: move left, move right, move up, move down, and “click”. One system used a discrete movement and feedback algorithm in which the user produced one quick facial gesture, causing a corresponding discrete movement to an adjacent letter. The other system was continuously updated and allowed the user to control the cursor’s velocity by relative activation between different sEMG channels. Participants were trained on one system for four sessions on consecutive days, followed by one crossover session on the untrained system. Information transfer rates (ITRs) were high for both systems compared to other potential input modalities, both initially and with training (Session 1: 62.1 bits/min, Session 4: 105.1 bits/min). Users of the continuous system showed significantly higher ITRs than the discrete users. Future development will focus on improvements to both systems, which may offer differential advantages for users with various motor impairments. PMID:25616053
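The information transfer rates quoted above are conventionally computed with the Wolpaw ITR formula; the function below reproduces that standard formula (the plugged-in numbers are illustrative, not the study's raw selection data):

```python
# Standard Wolpaw ITR: bits per selection scaled by the selection rate.
import math

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0                      # at or below chance carries no information
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a 26-letter keyboard at 95% accuracy and 25 selections/min gives
# roughly 105 bits/min, on the order of the trained performance reported above.
print(itr_bits_per_min(26, 0.95, 25))
```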
The Influence of Facial Signals on the Automatic Imitation of Hand Actions
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2016-01-01
Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, there is little known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate “in the moment” states than enduring traits. These data support the view that a smile primes multiple forms of imitation including the copying actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation. PMID:27833573
Matsukawa, Kanji; Endo, Kana; Asahara, Ryota; Yoshikawa, Miho; Kusunoki, Shinya; Ishida, Tomoko
2017-11-01
Our laboratory reported that facial skin blood flow may serve as a sensitive tool to assess emotional status. Cerebral neural correlates during emotional interventions should therefore be sought in relation to the changes in facial skin blood flow. To test the hypothesis that prefrontal activity is positively related to the changes in facial skin blood flow during emotionally charged stimulation, we examined the dynamic changes in prefrontal oxygenation (with near-infrared spectroscopy) and facial skin blood flows (with two-dimensional laser speckle and Doppler flowmetry) during emotionally charged audiovisual challenges for 2 min (viewing comedy, landscape, and horror movies) in 14 subjects. Hand skin blood flow and systemic hemodynamics were simultaneously measured. The extents of pleasantness and consciousness for each emotional stimulus were estimated by subjective rating from -5 (the most unpleasant; the most unconscious) to +5 (the most pleasant; the most conscious). Positively charged emotional stimulation (comedy) simultaneously decreased (P < 0.05) prefrontal oxygenation and facial skin blood flow, whereas negatively charged (horror) or neutral (landscape) emotional stimulation did not alter or only slightly decreased them. Neither hand skin blood flow nor any systemic cardiovascular variable changed significantly during positively charged emotional stimulation. The changes in prefrontal oxygenation had a highly positive correlation with the changes in facial skin blood flow without altered perfusion pressure, and they were inversely correlated with the subjective rating of pleasantness. The reduction in prefrontal oxygenation during positively charged emotional stimulation suggests a decrease in prefrontal neural activity, which may in turn elicit neurally mediated vasoconstriction of facial skin blood vessels. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
Tirant, M; Bayer, P; Hercogovấ, J; Fioranelli, M; Gianfaldoni, S; Chokoeva, A A; Tchernev, G; Wollina, U; Novotny, F; Roccia, M G; Maximov, G K; França, K; Lotti, T
2016-01-01
Systemic lupus erythematosus (SLE) is a complex autoimmune disease in which the body's immune system mistakenly attacks healthy tissue. It can affect the skin, joints, kidneys, brain, and other organs. We report the case of a 7-year-old female patient with facial lesions of SLE since the age of 5. There was no significant family history, and the patient had been a healthy child from birth. The child presented with a malar rash, also known as a butterfly rash, distributed over the cheeks but sparing the nasal bridge. This case demonstrates the efficacy of the Dr. Michaels® (Soratinex®) product family in the successful resolution of facial lesions of SLE.
Alternating facial paralysis in a girl with hypertension: case report.
Bağ, Özlem; Karaarslan, Utku; Acar, Sezer; Işgüder, Rana; Unalp, Aycan; Öztürk, Aysel
2013-12-01
Bell's palsy is the most common cause of acquired unilateral facial nerve palsy in childhood. Although the diagnosis depends on the exclusion of less common causes, such as infectious, traumatic, malignancy-associated, and hypertension-associated etiologies, pediatricians tend to diagnose idiopathic Bell's palsy whenever a child presents with acquired facial weakness. In this report, we present an eight-year-old girl with recurrent and alternating facial palsy as the first symptom of systemic hypertension. She received steroid treatment without blood pressure measurement, which could have worsened the hypertension. Clinicians should be aware of this association and should not neglect to measure blood pressure before considering steroid therapy for Bell's palsy. In addition, the less common causes of acquired facial palsy should be kept in mind, especially when recurrent and alternating courses occur.
Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.
Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi
2015-03-19
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, with the Fisher separation criterion used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions in reality, which is a critical problem seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
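The facial representation described, per-patch local binary pattern histograms scaled by Fisher-criterion weights, can be sketched as follows; the patch grid, LBP parameters, and weight vector are illustrative assumptions:

```python
# Sketch: weighted LBP-patch features for a grayscale face image.
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_lbp_features(gray_face, grid=(7, 6), n_points=8, radius=1,
                          patch_weights=None):
    """gray_face: 2-D uint8 image; patch_weights: optional per-patch Fisher weights."""
    lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    n_bins = n_points + 2                       # uniform codes + one catch-all bin
    feats, idx = [], 0
    for row in np.array_split(lbp, grid[0], axis=0):
        for patch in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            w = 1.0 if patch_weights is None else patch_weights[idx]
            feats.append(w * hist / max(hist.sum(), 1))
            idx += 1
    return np.concatenate(feats)  # fed to the multi-layer sparse representation
```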
Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min
2010-01-01
The aim of this article was to demonstrate a novel approach to designing facial prostheses using the transplantation concept and computer-assisted technology for extensive, large maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted to the region of the patient's face where the defect was located, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fitted onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpting techniques, and the discomfort generally associated with such conventional methods was greatly decreased. The virtual transplantation used to design the facial prosthesis for a maxillofacial defect crossing the facial midline, and the development of the retention structure for the eye, were both feasible.
Volk, Gerd Fabian; Pohlmann, Martin; Finkensieper, Mira; Chalmers, Heather J; Guntinas-Lichius, Orlando
2014-01-01
While standardized methods are established to examine the pathway from the motor cortex to the peripheral nerve in patients with facial palsy, a reliable method to evaluate the facial muscles in patients with long-term palsy for therapy planning is lacking. A 3D ultrasonographic (US) acquisition system, driven by a motorized linear mover combined with a conventional US probe, was used to acquire 3D data sets of several facial muscles on both sides of the face in a healthy subject and in seven patients with different types of unilateral degenerative facial nerve lesions. The US results were correlated with the duration of palsy and the electromyography results. Consistent 3D US-based volumetry through bilateral comparison was feasible for parts of the frontalis muscle, orbicularis oculi muscle, depressor anguli oris muscle, depressor labii inferioris muscle, and mentalis muscle. With the exception of the frontalis muscle, the facial muscle volumes were much smaller on the palsy side (minimum: 3% for the depressor labii inferioris muscle) than on the healthy side in patients with severe facial nerve lesions. In contrast, the frontalis muscles did not show a side difference. In the two patients with defective healing after spontaneous regeneration, a decrease in muscle volume was not seen; rather, synkinesis and hyperkinesis were correlated with muscle hypertrophy on the palsy side compared with the healthy side. 3D ultrasonography seems to be a promising tool for regional and quantitative evaluation of facial muscles in patients with facial palsy receiving facial reconstructive surgery or conservative treatment.
Assessing photoplethysmographic imaging performance beyond facial perfusion analysis
NASA Astrophysics Data System (ADS)
Amelard, Robert; Hughson, Richard L.; Greaves, Danielle K.; Clausi, David A.; Wong, Alexander
2017-02-01
Photoplethysmographic imaging (PPGI) systems are relatively new non-contact biophotonic diffuse reflectance systems able to assess arterial pulsations through transient changes in light-tissue interaction. Many PPGI studies have focused on extracting heart rate from the face or hand. Though PPGI systems can be used for widefield imaging of any anatomical area, whole-body investigations are lacking. Here, using a novel PPGI system, coded hemodynamic imaging (CHI), we explored and analyzed the pulsatility at major arterial locations across the whole body, including the neck (carotid artery), arm/wrist (brachial, radial, and ulnar arteries), and leg/feet (popliteal and tibial arteries). CHI was positioned 1.5 m from the participant, and diffuse reflectance from broadband tungsten-halogen illumination was filtered using an 850-1000 nm bandpass filter for deep tissue penetration. Images were acquired over a highly varied 24-participant sample (11/13 female/male, age 28.7 ± 12.4 years, BMI 25.5 ± 5.2 kg/m2), and a preliminary case study was performed. B-mode ultrasound images were acquired to validate observations through planar arterial characteristics.
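CHI's coded-illumination processing is more involved than can be shown here, but the generic PPGI core, spatially averaging a skin region per frame, band-pass filtering around plausible heart rates, and reading the spectral peak, can be sketched as follows (all names and parameters are assumptions):

```python
# Sketch: extract a pulse rate from a stack of NIR frames for one skin ROI.
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_bpm(frames, fps, roi, band=(0.7, 3.0)):
    """frames: (n_frames, H, W) intensity images; roi: (y0, y1, x0, x1) skin region;
    band: plausible heart-rate range in Hz (42-180 bpm here)."""
    y0, y1, x0, x1 = roi
    signal = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))   # per-frame mean intensity
    b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]             # dominant frequency, bpm
```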
Non-Cooperative Facial Recognition Video Dataset Collection Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.
The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e., not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, which may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort are: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.
Li, Qiang; Zhou, Xu; Wang, Yue; Qian, Jin; Zhang, Qingguo
2018-05-15
Although facial paralysis is a fundamental feature of hemifacial microsomia, the frequency and distribution of nerve abnormalities in patients with hemifacial microsomia remain unclear. In this study, the authors classified 1125 cases with microtia (including 339 patients with hemifacial microsomia and 786 with isolated microtia) according to the Orbital Distortion, Mandibular Hypoplasia, Ear Anomaly, Nerve Involvement, Soft Tissue Deficiency (OMENS) scheme. The authors then performed an independent analysis to describe the distribution of nerve abnormalities and reveal possible relationships between facial paralysis and the other 4 fundamental features in the OMENS system. Results revealed that facial paralysis is present in 23.9% of patients with hemifacial microsomia. The frontal-temporal branch is the most vulnerable branch in the total 1125 cases with microtia. The occurrence of facial paralysis is positively correlated with mandibular hypoplasia and soft tissue deficiency both in the total 1125 cases and in the hemifacial microsomia patients. Orbital asymmetry is related to facial paralysis only in the total microtia cases, and ear deformity is related to facial paralysis only in hemifacial microsomia patients. No significant association was found between the severity of facial paralysis and any of the other 4 OMENS anomalies. These data suggest that the occurrence of facial paralysis may be associated with other OMENS abnormalities. The presence of serious mandibular hypoplasia or soft tissue deficiency should alert the clinician to a high possibility, but not a high severity, of facial paralysis.
2004-10-25
FUSEDOT does not require facial recognition, or video surveillance of public areas, both of which are apparently a component of TIA ([26], pp... does not use fuzzy signal detection. Involves facial recognition and video surveillance of public areas. Involves monitoring the content of voice... fuzzy signal detection, which TIA does not. Second, FUSEDOT would be easier to develop, because it does not require the development of facial
Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion
Im, Hee Yeon; Albohn, Daniel N.; Steiner, Troy G.; Cushing, Cody A.; Adams, Reginald B.; Kveraga, Kestutis
2017-01-01
In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading “crowd emotion”. We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions. PMID:29226255
Evaluation of persons of varying ages.
Stolte, J F
1996-06-01
Dual coding theory (Paivio, 1986) suggests that communicating a stimulus person's age verbally/abstractly through words and numbers arouses little feeling and has little effect on the way others evaluate her or him, whereas communicating age nonverbally/concretely through facial photographs arouses more feeling and has a greater impact on evaluation. Two experiments reported in this article, involving U.S. students and incorporating techniques developed in prior research by Levin (1988) strongly support these theoretical expectations.
Jacome, Daniel E
2010-07-01
A 42-year-old farmer developed persistent mid-facial segmental pain and Meige's syndrome several months after suffering facial trauma and a fracture of the nose. He was not afflicted by systemic ailments, had no family history of movement disorder and no history of exposure to neuroleptic drugs. He was capable of suppressing his facial pain by performing a ritual that included forcefully tilting his head backwards, lowering of his eyelids and applying strong pressure to his nasion. Exceptionally dystonic movements and elaborate behavioral rituals may serve as a mechanism of pain suppression. Copyright 2010 Elsevier B.V. All rights reserved.
Facial skin blood flow responses during exposures to emotionally charged movies.
Matsukawa, Kanji; Endo, Kana; Ishii, Kei; Ito, Momoka; Liang, Nan
2018-03-01
The changes in regional facial skin blood flow and vascular conductance have been assessed for the first time with noninvasive two-dimensional laser speckle flowmetry during audiovisually elicited emotional challenges for 2 min (comedy, landscape, and horror movie) in 12 subjects. Limb skin blood flow and vascular conductance and systemic cardiovascular variables were simultaneously measured. The extents of pleasantness and consciousness for each emotional stimulus were estimated by subjective rating from -5 (the most unpleasant; the most unconscious) to +5 (the most pleasant; the most conscious). Facial skin blood flow and vascular conductance, especially in the lips, decreased during viewing of comedy and horror movies, whereas they did not change during viewing of a landscape movie. The decreases in facial skin blood flow and vascular conductance were the greatest with the comedy movie. The changes in lip, cheek, and chin skin blood flow negatively correlated (P < 0.05) with the subjective ratings of pleasantness and consciousness. The changes in lip skin vascular conductance negatively correlated (P < 0.05) with the subjective rating of pleasantness, while the changes in infraorbital, subnasal, and chin skin vascular conductance negatively correlated (P < 0.05) with the subjective rating of consciousness. However, none of the changes in limb skin blood flow, limb vascular conductance, or systemic hemodynamics correlated with the subjective ratings. The mental arithmetic task did not alter facial and limb skin blood flows, although the task influenced systemic cardiovascular variables. These findings suggest that the more pleasant or conscious the emotional state becomes, the greater the neurally mediated vasoconstriction in facial skin blood vessels.
Efficacy of Autologous Microfat Graft on Facial Handicap in Systemic Sclerosis Patients.
Sautereau, Nolwenn; Daumas, Aurélie; Truillet, Romain; Jouve, Elisabeth; Magalon, Jéremy; Veran, Julie; Casanova, Dominique; Frances, Yves; Magalon, Guy; Granel, Brigitte
2016-03-01
Autologous adipose tissue injection is used in plastic surgery for correction of localized tissue atrophy and has also been successfully offered for treatment of localized scleroderma. We aimed to evaluate whether patients with systemic sclerosis (SSc) and facial handicap could also benefit from this therapy. We included 14 patients (mean age of 53.8 ± 9.6 years) suffering from SSc with facial handicap defined by Mouth Handicap in Systemic Sclerosis Scale (MHISS) score more than or equal to 20, a Rodnan skin score on the face more than or equal to 1, and maximal mouth opening of less than 55 mm. Autologous adipose tissue injection was performed under local anesthesia using the technique of subcutaneous microinjection. The main objective of this study was an improvement of the MHISS score 6 months after the surgical treatment. The procedure was well tolerated. We observed a mean decrease in the MHISS score of 10.7 points (±5.1; P < 0.0001) at 6 months (35% improvement). Secondary efficacy parameters assessing perioral skin sclerosis, maximum mouth opening, sicca syndrome, and facial pain significantly improved at 3 and 6 months postsurgery. At a 6-month follow-up, 75% of patients were satisfied or very satisfied of the adipose tissue microinjection therapy. Our study suggests that subcutaneous perioral microfat injection in patients with SSc is beneficial in the treatment of facial handicap, skin sclerosis, mouth opening limitation, sicca syndrome, and facial pain. Thus, this minimally invasive approach offers a new hope for face therapy for patients with SSc.
Sun, Yajing; Jin, Cheng; Li, Keyong; Zhang, Qunfeng; Geng, Liang; Liu, Xundao; Zhang, Yi
2017-01-01
The purpose of the present study was to restore orbicularis oculi muscle function using the implantable artificial facial nerve system (IAFNS). The in vivo part of the IAFNS was implanted into 12 rabbits that were facially paralyzed on the right side of the face to restore the function of the orbicularis oculi muscle, which was indicated by closure of the paralyzed eye when the contralateral side was closed. Wireless communication links were established between the in vivo part (the processing chip and microelectrode) and the external part (System Controller program) of the system, which were used to set the working parameters and indicate the working state of the processing chip and microelectrode implanted in the body. A disturbance field strength test of the IAFNS processing chip was performed in a magnetic field dark room to test its electromagnetic radiation safety. Test distances investigated were 0, 1, 3 and 10 m, and levels of radiation intensity were evaluated in the horizontal and vertical planes. Anti-interference experiments were performed to test the stability of the processing chip under the interference of electromagnetic radiation. The fully implanted IAFNS was run for 5 h per day for 30 consecutive days to evaluate the accuracy and precision as well as the long-term stability and effectiveness of wireless communication. The stimulus intensity (range, 0–8 mA) was adjusted every 3 days to determine the minimum stimulation intensity that could elicit movement on the paralyzed side. The effective stimulation rate was also tested by comparing the number of eye-close movements on both sides. The results of the present study indicated that the IAFNS could rebuild the reflex arc, inducing the experimental rabbits to close the eye on the paralyzed side. The System Controller program was able to monitor the in vivo part of the artificial facial nerve system in real time and adjust the working pattern, stimulation intensity and frequency, waveform range, and stimulation time. No significant differences in the stimulus intensities were observed during the 30 days. The artificial facial nerve system chip operated stably in the anti-interference test, and the radiation field strength of the system was within a safe range according to the national standard. The IAFNS functioned without any interference and was able to restore functionality to facially paralyzed rabbits over the course of 30 days. PMID:29285055
Assessing Attentional Prioritization of Front-of-Pack Nutrition Labels using Change Detection
Becker, Mark W.; Sundar, Raghav Prashant; Bello, Nora; Alzahabi, Reem; Weatherspoon, Lorraine; Bix, Laura
2015-01-01
We used a change detection method to evaluate attentional prioritization of nutrition information that appears in the traditional “Nutrition Facts Panel” and in front-of-pack nutrition labels. Results provide compelling evidence that front-of-pack labels attract attention more readily than the Nutrition Facts Panel, even when participants are not specifically tasked with searching for nutrition information. Further, color-coding the relative nutritional value of key nutrients within the front-of-pack label resulted in increased attentional prioritization of nutrition information, but coding using facial icons did not significantly increase attention to the label. Finally, the general pattern of attentional prioritization across front-of-pack designs was consistent across a diverse sample of participants. Our results indicate that color-coded, front-of-pack nutrition labels increase attention to the nutrition information of packaged food, a finding that has implications for current policy discussions regarding labeling change. PMID:26851468
Neurofibromatosis of the head and neck: classification and surgical management.
Latham, Kerry; Buchanan, Edward P; Suver, Daniel; Gruss, Joseph S
2015-03-01
Neurofibromatosis is common and presents with variable penetrance and manifestations in one in 2500 to one in 3000 live births. The management of these patients is often multidisciplinary because of the complexity of the disease. Plastic surgeons are frequently involved in the surgical management of patients with head and neck involvement. A 20-year retrospective review of patients treated surgically for head and neck neurofibroma was performed. Patients were identified according to International Classification of Diseases, Ninth Revision codes for neurofibromatosis and from the senior author's database. A total of 59 patients with head and neck neurofibroma were identified. These patients were categorized into five distinct, but not exclusive, categories to assist with diagnosis and surgical management. These categories included plexiform, cranioorbital, facial, neck, and parotid/auricular neurofibromatosis. A surgical classification system and the clinical characteristics of head and neck neurofibromatosis are presented to assist practitioners with the diagnosis and surgical management of this complex disease. The surgical management of the cranioorbital type is discussed in detail in 24 patients. The importance and safety of facial nerve dissection and preservation using intraoperative nerve monitoring were validated in 16 dissections in 15 patients. Massive involvement of the neck extending from the skull base to the mediastinum, frequently considered inoperable, has been safely resected by the use of access osteotomies of the clavicle and sternum, muscle takedown, and brachial plexus dissection and preservation using intraoperative nerve monitoring. Therapeutic, IV.
Morphological evaluation of clefts of the lip, palate, or both in dogs.
Peralta, Santiago; Fiani, Nadine; Kan-Rohrer, Kimi H; Verstraete, Frank J M
2017-08-01
OBJECTIVE To systematically characterize the morphology of cleft lip, cleft palate, and cleft lip and palate in dogs. ANIMALS 32 client-owned dogs with clefts of the lip (n = 5), palate (23), or both (4) that had undergone a CT or cone-beam CT scan of the head prior to any surgical procedures involving the oral cavity or face. PROCEDURES Dog signalment and skull type were recorded. The anatomic form of each defect was characterized by use of a widely used human oral-cleft classification system on the basis of CT findings and clinical images. Other defect morphological features, including shape, relative size, facial symmetry, and vomer involvement, were also recorded. RESULTS 9 anatomic forms of cleft were identified. Two anatomic forms were identified in the 23 dogs with cleft palate, in which differences in defect shape and size as well as vomer abnormalities were also evident. Seven anatomic forms were observed in 9 dogs with cleft lip or cleft lip and palate, and most of these dogs had incisive bone abnormalities and facial asymmetry. CONCLUSIONS AND CLINICAL RELEVANCE The morphological features of congenital cleft lip, cleft palate, and cleft lip and palate were complex and varied among dogs. The features identified here may be useful for surgical planning, development of clinical coding schemes, or informing genetic, embryological, or clinical research into birth defects in dogs and other species.
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions, within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software packages, which usually require users to have advanced 3D graphics skills, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial feature, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed for generating expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.
Familiarity effects in the construction of facial-composite images using modern software systems.
Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B
2011-12-01
We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than the external features for familiar targets. These findings applied to both systems, although benefit emerged for the holistic type due to more accurate construction of internal features and evidence for a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as for software designers to help them improve the effectiveness of their composite systems.
Use of Resorbable Fixation System in Pediatric Facial Fractures.
Wong, Frankie K; Adams, Saleigh; Hudson, Donald A; Ozaki, Wayne
2017-05-01
Resorbable fixation system (RFS) is an alternative to titanium in open reduction and internal fixation of pediatric facial fractures. This study retrospectively reviewed all medical records in a major metropolitan pediatric hospital in Cape Town, South Africa from September 2010 through May 2014. Inclusion criteria were children under the age of 13 with facial fractures who have undergone open reduction and internal fixation using RFS. Intraoperative and postoperative complications were reviewed. A total of 21 patients were included in this study. Twelve were males and 9 were females. Good dental occlusion was achieved in all patients and there were no complications intraoperatively. Three patients developed postoperative implanted-related complications: all 3 patients developed malocclusions and 1 developed an additional sterile abscess over the right zygomatic bone. For the latter, incision and drainage was performed and the problem resolved without additional operations. Resorbable fixation system is an alternative to titanium products in the setting of pediatric facial fractures without complications involving delayed union or malunion. The combination of intermaxillary fixation and RFS is not needed postoperatively for adequate fixation of mandible fractures. Resorbable fixation system is able to provide adequate internal fixation when both low-stress and high-stress craniofacial fractures occur simultaneously.
Holmes, Amanda; Winston, Joel S; Eimer, Martin
2005-10-01
To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.
Peeters, N; Lemkens, P; Leach, R; Gemels, B; Schepers, S; Lemmens, W
Facial trauma. Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.
An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification.
Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina
2017-03-01
The assessment of facial mimicry is important in forensic anthropology; in addition, the application of modern 3D image acquisition systems may help with the analysis of facial surfaces. This study aimed at presenting a novel method for comparing 3D profiles in different facial expressions. Ten male adults, aged between 30 and 40 years, underwent acquisitions by stereophotogrammetry (VECTRA-3D®) with different expressions (neutral, happy, sad, angry, surprised). The acquisition of each individual was then superimposed on the neutral one according to nine landmarks, and the root mean square (RMS) value between the two expressions was calculated. The highest difference in comparison with the neutral standard was shown by the happy expression (RMS 4.11 mm), followed by the surprised (RMS 2.74 mm), sad (RMS 1.3 mm), and angry ones (RMS 1.21 mm). This pilot study shows that the 3D-3D superimposition may provide reliable results concerning facial alteration due to mimicry. © 2016 American Academy of Forensic Sciences.
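As an illustration of the kind of computation involved, the sketch below performs a landmark-based rigid superimposition (Kabsch algorithm) and then reports the RMS distance between the two scans. It assumes one-to-one point correspondence between the expression and neutral meshes; a real stereophotogrammetry workflow such as the one described would typically use surface-to-surface (closest point) distances instead.

```python
import numpy as np

def rigid_align(src_lms, dst_lms):
    """Least-squares rotation + translation mapping src landmarks onto
    dst landmarks (Kabsch algorithm), applied to row-vector points."""
    src_c = src_lms - src_lms.mean(axis=0)
    dst_c = dst_lms - dst_lms.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = dst_lms.mean(axis=0) - src_lms.mean(axis=0) @ R
    return R, t

def rms_difference(expr_pts, neutral_pts, expr_lms, neutral_lms):
    """Superimpose the expression scan on the neutral scan via the nine
    landmarks, then report the RMS point-to-point distance in mm."""
    R, t = rigid_align(expr_lms, neutral_lms)
    aligned = expr_pts @ R + t
    return np.sqrt(np.mean(np.sum((aligned - neutral_pts) ** 2, axis=1)))
```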
Hontanilla, Bernardo; Vila, Antonio
2012-02-01
To compare quantitatively the results obtained after hemihypoglossal nerve transposition and microvascular gracilis transfer associated with a cross facial nerve graft (CFNG) for reanimation of a paralysed face, 66 patients underwent hemihypoglossal transposition (n = 25) or microvascular gracilis transfer and CFNG (n = 41). The commissural displacement (CD) and commissural contraction velocity (CCV) in the two groups were compared using the system known as Facial clima. There was no inter-group variability between the groups (p > 0.10) in either variable. However, intra-group variability was detected between the affected and healthy side in the transposition group (p = 0.036 and p = 0.017, respectively). The transfer group had greater symmetry in displacement of the commissure (CD) and commissural contraction velocity (CCV) than the transposition group and patients were more satisfied. However, the transposition group had correct symmetry at rest but more asymmetry of CCV and CD when smiling.
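For concreteness, a minimal reading of the two reported parameters might look like the sketch below, which derives commissural displacement (CD) and commissural contraction velocity (CCV) from a tracked commissure trajectory. The Facial clima system itself is a dedicated motion-capture setup; the 2D image-plane coordinates and sampling details here are assumptions.

```python
import numpy as np

def commissure_metrics(positions, fps):
    """positions: (T, 2) tracked oral-commissure coordinates in mm during a
    smile. Returns peak displacement (CD, mm) and peak contraction velocity
    (CCV, mm/s) relative to the resting frame."""
    disp = np.linalg.norm(positions - positions[0], axis=1)
    vel = np.gradient(disp, 1.0 / fps)
    return disp.max(), vel.max()
```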
Facial soft biometric features for forensic face recognition.
Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier
2015-12-01
This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered as a soft biometric approach. The motivation of this work is to provide a set of facial features, which can be understood by non-experts such as judges and support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best systems configurations achieves rank 10 match results of 100% for ATVS database and 75% for MORPH database demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
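A toy version of the landmark-to-feature conversion step is sketched below. The region definitions and landmark indices are hypothetical stand-ins (the paper defines facial regions of forensic value over its own landmark set), but the idea of reducing raw landmarks to interpretable size/shape descriptors is the same.

```python
import numpy as np

# Hypothetical landmark indices per facial region (a 68-point layout is
# assumed here purely for illustration).
REGIONS = {
    "nose":  [27, 28, 29, 30, 31, 33, 35],
    "mouth": [48, 51, 54, 57],
}

def region_features(landmarks):
    """Convert raw (x, y) landmarks into per-region size/shape descriptors:
    width, height and aspect ratio, the sort of features a non-expert such
    as a judge can interpret directly."""
    feats = {}
    for name, idx in REGIONS.items():
        pts = landmarks[idx]
        width = pts[:, 0].max() - pts[:, 0].min()
        height = pts[:, 1].max() - pts[:, 1].min()
        feats[name] = {"width": width, "height": height,
                       "aspect": width / height}
    return feats
```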
Privacy Preserving Facial and Fingerprint Multi-biometric Authentication
NASA Astrophysics Data System (ADS)
Anzaku, Esla Timothy; Sohn, Hosik; Ro, Yong Man
The cases of identity theft can be mitigated by the adoption of secure authentication methods. Biohashing and its variants, which utilize secret keys and biometrics, are promising methods for secure authentication; however, their shortcoming is degraded performance under the assumption that secret keys are compromised. In this paper, we extend the concept of Biohashing to multi-biometrics - facial and fingerprint traits. We chose these traits because they are widely used, although little research attention has been given to designing privacy preserving multi-biometric systems using them. Instead of just using a single modality (facial or fingerprint), we present a framework for using both modalities. The improved performance of the proposed method using face and fingerprint together, compared with either trait used in isolation, is evaluated using two chimerical bimodal databases formed from publicly available facial and fingerprint databases.
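A minimal sketch of the basic Biohashing step is given below, assuming a fixed-length real-valued feature vector per modality and a simple feature-level concatenation for fusion (only one possible fusion strategy, and not necessarily the paper's). The secret key seeds a pseudo-random orthonormal projection whose signs form the binary BioHash code.

```python
import numpy as np

def biohash(feature_vec, secret_key, n_bits=64):
    """BioHashing sketch: project the feature vector onto key-seeded
    orthonormal random directions and binarise at zero (assumes roughly
    zero-mean features and feature_vec.size >= n_bits)."""
    rng = np.random.default_rng(secret_key)
    rand = rng.standard_normal((feature_vec.size, n_bits))
    ortho, _ = np.linalg.qr(rand)            # orthonormal columns
    projections = feature_vec @ ortho        # n_bits inner products
    return (projections > 0).astype(np.uint8)

# Illustrative feature-level fusion of the two modalities:
# code = biohash(np.concatenate([face_vec, fingerprint_vec]), secret_key=1234)
```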
Illuminant color estimation based on pigmentation separation from human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2015-03-01
Humans have a visual ability called "color constancy" that maintains the perceived colors of an object across various light sources. An effective color constancy algorithm that uses the human facial color in a digital color image has been proposed; however, this method produces erroneous estimates because of differences among individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis. Skin color analysis is a method of separating skin color into melanin, hemoglobin, and shading components. We use the stationary property of Japanese facial color, which is calculated from the melanin and hemoglobin components. As a result, the proposed method can use the subject's facial color in an image without depending on individual differences among Japanese facial colors.
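The pigmentation-separation step can be sketched as a projection in optical-density (log) space, as below. The melanin and hemoglobin density vectors here are illustrative placeholders (in the literature they are typically estimated from skin-image data, e.g. by independent component analysis), and the shading component is simply left in the residual.

```python
import numpy as np

# Illustrative (not measured) melanin / hemoglobin density directions
# in -log(RGB) space.
MELANIN = np.array([0.74, 0.57, 0.36])
HEMOGLOBIN = np.array([0.42, 0.70, 0.58])

def separate_pigments(rgb):
    """Decompose skin pixels (N, 3, reflectance in (0, 1]) into melanin and
    hemoglobin densities by least squares in optical-density space."""
    od = -np.log(np.clip(rgb, 1e-4, 1.0))        # optical densities, (N, 3)
    basis = np.stack([MELANIN, HEMOGLOBIN])      # (2, 3)
    dens, *_ = np.linalg.lstsq(basis.T, od.T, rcond=None)
    melanin, hemoglobin = dens                   # each (N,)
    return melanin, hemoglobin
```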
Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera
2016-08-01
Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state of the art methods accordingly. We also present the important datasets and the bench-marking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.
Discrimination of gender using facial image with expression change
NASA Astrophysics Data System (ADS)
Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji
2005-12-01
By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the male-to-female ratio and age distribution of visitors, and use it to improve their management plans. However, this work is carried out manually, which is a considerable burden for small stores. In this paper, the authors propose a method for discriminating between men and women by extracting differences in facial expression change from color facial images. Many methods for the automatic recognition of individuals using moving or still facial images already exist in the field of image processing. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, and similar factors. We therefore propose a method that is not affected by individual characteristics, such as the size and position of facial parts, by focusing instead on the change of an expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. In the last step, the values of those features are compared between the input data and the database, and the gender is discriminated. Experiments on laughing and smiling expressions provided good results for discriminating gender.
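Two of the described steps are easy to make concrete, as in the sketch below: a rough HSV skin-region mask and the rate-of-change feature for a facial part's bounding box. The thresholds are guesses that would need tuning to the capture conditions, not values from the paper.

```python
import numpy as np
import cv2  # OpenCV, for the BGR -> HSV conversion

def skin_mask_hsv(bgr_image):
    """Rough skin-region extraction with hue/saturation/value thresholds
    (OpenCV hue range is 0-179); returns a binary mask."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = (h < 25) & (s > 40) & (s < 220) & (v > 60)
    return mask.astype(np.uint8) * 255

def change_rate(neutral_box, expressive_box):
    """Rate of change of a facial part's bounding box (width, height)
    between the expressionless and expressive images."""
    (w0, h0), (w1, h1) = neutral_box, expressive_box
    return (w1 - w0) / w0, (h1 - h0) / h0
```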
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: direct anthropometry, digitizer, 3D CT, 3D scanner, and the DI3D system. With these measurement methods, 10 measurement values representing facial width, height, and depth were determined twice, with a one-week interval, by one examiner. These data were analyzed with the SPSS program. The positions created based on multi-dimensional scaling showed that direct anthropometry, 3D CT, the digitizer, and the 3D scanner produced relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy and had a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained by using the DI3D system was statistically significantly different from that obtained by using the digital caliper, digitizer and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of the measurement methods, two measurement values (Ls-Li, G-Pg) obtained by using direct anthropometry, one measurement value (N'-Prn) obtained by using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained by using the DI3D system were statistically significantly different. However, the mean measurement error in every measurement method was low (<0.7 mm). All measurement values obtained by using the 3D CT and 3D scanner did not show any statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
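The abstract does not spell out its error formulas; a common choice for duplicate measurements, shown below on the assumption that it matches the study's usage, is Dahlberg's technical error of measurement (TEM) together with a reliability coefficient of 1 - TEM²/s².

```python
import numpy as np

def technical_error(m1, m2):
    """Dahlberg's technical error of measurement for paired repeat
    measurements m1, m2 (same examiner, one week apart):
    TEM = sqrt(sum(d^2) / (2n))."""
    d = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def reliability(m1, m2):
    """Coefficient of reliability: share of total variance not attributable
    to measurement error, R = 1 - TEM^2 / s^2."""
    tem2 = technical_error(m1, m2) ** 2
    total_var = np.var(np.concatenate([m1, m2]), ddof=1)
    return 1.0 - tem2 / total_var
```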
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Lyme disease and Bell's palsy: an epidemiological study of diagnosis and risk in England.
Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles
2017-05-01
Lyme disease is caused by a tick-borne spirochaete of the Borrelia species. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell's palsy. To produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell's palsy) in isolation, and as a combination. Patients' age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell's palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. © British Journal of General Practice 2017.
Delayed facial nerve decompression for Bell's palsy.
Kim, Sang Hoon; Jung, Junyang; Lee, Jong Ha; Byun, Jae Yong; Park, Moon Suh; Yeo, Seung Geun
2016-07-01
Incomplete recovery of facial motor function continues to be a long-term sequela in some patients with Bell's palsy. The purpose of this study was to investigate the efficacy of transmastoid facial nerve decompression after steroid and antiviral treatment in patients with late-stage Bell's palsy. Twelve patients underwent surgical decompression for Bell's palsy 21-70 days after onset, whereas 22 patients were followed up after steroid and antiviral therapy without decompression. Surgical criteria included greater than 90% degeneration on electroneuronography and no voluntary electromyography potentials. This study was a retrospective study of electrodiagnostic data and medical chart review between 2006 and 2013. Recovery from facial palsy was assessed using the House-Brackmann grading system. The final recovery rate did not differ significantly between the two groups; however, all patients in the decompression group recovered to at least House-Brackmann grade III at final follow-up. Although postoperative hearing threshold was increased in both groups, there was no significant between-group difference in hearing threshold. Transmastoid decompression of the facial nerve in patients with severe late-stage Bell's palsy at risk for a poor facial nerve outcome reduced severe complications of facial palsy with minimal morbidity.
Distinct growth of the nasomaxillary complex in Au. sediba.
Lacruz, Rodrigo S; Bromage, Timothy G; O'Higgins, Paul; Toro-Ibacache, Viviana; Warshaw, Johanna; Berger, Lee R
2015-10-15
Studies of facial ontogeny in immature hominins have contributed significantly to understanding the evolution of human growth and development. The recently discovered hominin species Australopithecus sediba is represented by a well-preserved and nearly complete facial skeleton of a juvenile (MH1) which shows a derived facial anatomy. We examined MH1 using high-resolution synchrotron radiation imaging to interpret features of the oronasal complex pertinent to facial growth. We also analyzed bone surface microanatomy to identify and map fields of bone deposition and bone resorption, which affect the development of the facial skeleton. The oronasal anatomy (premaxilla-palate-vomer architecture) is similar to that of other Australopithecus species. However, surface growth remodeling of the midface (nasomaxillary complex) differs markedly from Australopithecus, Paranthropus, early Homo and from KNM-WT 15000 (H. erectus/ergaster), showing a distinct distribution of vertically disposed alternating depository and resorptive fields in relation to anterior dental roots and the subnasal region. The ontogeny of the MH1 midface superficially resembles some H. sapiens in the distribution of remodeling fields. The facial growth of MH1 appears unique among early hominins, pointing either to an evolutionary modification in facial ontogeny at 1.9 My or to changes in masticatory system loading associated with diet.
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as the eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometrical information, for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
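The core projection step is simple enough to sketch, as below: gradient magnitudes are summed along rows and columns, and feature bands appear as local maxima of the resulting profiles (the minimum-between-maxima analysis described above would then be run on these profiles).

```python
import numpy as np

def gradient_projections(gray):
    """Vertically and horizontally oriented projections of the gradient
    magnitude of a grayscale face region: feature bands (eyes, mouth, ...)
    show up as local maxima in the row/column profiles."""
    gy, gx = np.gradient(gray.astype(float))   # derivatives along rows, cols
    mag = np.hypot(gx, gy)
    row_profile = mag.sum(axis=1)              # one value per image row
    col_profile = mag.sum(axis=0)              # one value per image column
    return row_profile, col_profile
```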
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. The detected EMGs are passed through a band-pass filter, and root mean square features are extracted. Combinations with differing numbers of gestures per group are formed from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations demonstrates their suitability as command controllers.
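The front end of such a pipeline (band-pass filtering plus windowed RMS feature extraction) can be sketched as below; the sampling rate, passband and window length are assumptions, not values from the paper, and the fuzzy c-means classification stage would run on the resulting feature vectors.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_rms_features(emg, fs=1000.0, band=(20.0, 450.0), win_s=0.2):
    """Band-pass filter one raw facial-EMG channel, then return one RMS
    feature per non-overlapping window of win_s seconds."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, emg)
    n = int(win_s * fs)
    windows = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(windows ** 2, axis=1))
```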
Voluntary facial action generates emotion-specific autonomic nervous system activity.
Levenson, R W; Ekman, P; Friesen, W V
1990-07-01
Four experiments were conducted to determine whether voluntarily produced emotional facial configurations are associated with differentiated patterns of autonomic activity, and if so, how this might be mediated. Subjects received muscle-by-muscle instructions and coaching to produce facial configurations for anger, disgust, fear, happiness, sadness, and surprise while heart rate, skin conductance, finger temperature, and somatic activity were monitored. Results indicated that voluntary facial activity produced significant levels of subjective experience of the associated emotion, and that autonomic distinctions among emotions: (a) were found both between negative and positive emotions and among negative emotions, (b) were consistent between group and individual subjects' data, (c) were found in both male and female subjects, (d) were found in both specialized (actors, scientists) and nonspecialized populations, (e) were stronger when the voluntary facial configurations most closely resembled actual emotional expressions, and (f) were stronger when experience of the associated emotion was reported. The capacity of voluntary facial activity to generate emotion-specific autonomic activity: (a) did not require subjects to see facial expressions (either in a mirror or on an experimenter's face), and (b) could not be explained by differences in the difficulty of making the expressions or by differences in concomitant somatic activity.
Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.
Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei
2016-04-01
The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that relates 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.
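As a drastically simplified stand-in for the 2-D-to-3-D landmark bridging step (not the paper's coupled dictionary with local coordinate coding, just a ridge-regression map between the two landmark spaces learned from paired examples):

```python
import numpy as np

def fit_landmark_map(lm2d_train, lm3d_train, lam=1e-3):
    """Learn a linear ridge-regression map from flattened 2-D landmarks
    (N, L, 2) to 3-D landmarks (N, L, 3) over paired training faces."""
    X = lm2d_train.reshape(len(lm2d_train), -1)
    Y = lm3d_train.reshape(len(lm3d_train), -1)
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_3d_landmarks(lm2d, W):
    """Map one new face's 2-D landmarks to predicted 3-D positions, which
    would then drive vertex transfer on the intermediate model."""
    return (lm2d.reshape(1, -1) @ W).reshape(-1, 3)
```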
Lahera, Guillermo; Ruiz, Alicia; Brañas, Antía; Vicens, María; Orozco, Arantxa
Previous studies have linked processing speed with social cognition and functioning of patients with schizophrenia. A discriminant analysis is needed to determine the different components of this neuropsychological construct. This paper analyzes the impact of processing speed, reaction time and sustained attention on social functioning. 98 outpatients between 18 and 65 with DSM-5 diagnosis of schizophrenia, with a period of 3 months of clinical stability, were recruited. Sociodemographic and clinical data were collected, and the following variables were measured: processing speed (Trail Making Test [TMT], symbol coding [BACS], verbal fluency), simple and elective reaction time, sustained attention, recognition of facial emotions and global functioning. Processing speed (measured only through the BACS), sustained attention (CPT) and elective reaction time (but not simple) were associated with functioning. Recognizing facial emotions (FEIT) correlated significantly with scores on measures of processing speed (BACS, Animals, TMT), sustained attention (CPT) and reaction time. The linear regression model showed a significant relationship between functioning, emotion recognition (P=.015) and processing speed (P=.029). A deficit in processing speed and facial emotion recognition are associated with worse global functioning in patients with schizophrenia. Copyright © 2017 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.
How face blurring affects body language processing of static gestures in women and men.
Proverbio, A M; Ornaghi, L; Gabaro, V
2018-05-14
The role of facial coding in body language comprehension was investigated by ERP recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic), which could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men to congruent stimuli, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration did not affect accuracy in women as reflected by omission percentages, nor reduced their cognitive potentials, thus suggesting a better comprehension of face deprived pantomimes. N170 response (modulated by congruity and face presence) peaked later in men than in women. Late Positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information or better at understanding body language in face-deprived social information.
Novel magnet-retained prosthetic system for facial reconstruction.
Ahmed, Mostafa M; Piper, James M; Hansen, Nancy A; Sutton, Alan J; Schmalbach, Cecelia E
2014-01-01
Traumatic facial defects negatively impact speech, mastication, deglutition, dental hygiene, and psychosocial well-being. Reconstruction must address restoration of function and aesthetics to provide quality of life. This report describes soft-tissue reconstruction using a novel magnet-retained facial prosthesis without osseointegrated abutments, performed in a patient after traumatic loss of the entire left lower part of the face, including lips, commissure, and mentum. This reconstructive technique successfully addressed the cosmetic defect while also restoring function with respect to speech and oral nutrition. For this reason, magnet-retained facial prosthesis should be added to free tissue transfer and regional flaps as a reasonable option in the reconstructive algorithm for complex soft-tissue defects of the lower face.
Clinical characteristics of patients with facial psoriasis in Malaysia.
Syed Nong Chek, Sharifah Rosniza; Robinson, Suganthy; Mohd Affandi, Azura; Baharum, Nurakmal
2016-10-01
Psoriasis involving the face is visible and can cause considerable emotional distress to patients. Its presence may also confer a poorer prognosis for the patient. This study sought to evaluate the characteristics of facial psoriasis in Malaysia. A cross-sectional study conducted using data from the Malaysian Psoriasis Registry from 2007 to 2011. Specific risk factors, i.e., age, age of onset, gender, duration of disease, obesity group, body surface area, Dermatology Life Quality Index (DLQI), family history of psoriasis, nail involvement, psoriatic arthritis, phototherapy, systemic therapy, clinic visit, days of work/school, and hospital admission due to psoriasis in the last 6 months were analyzed. A total of 48.4% of patients had facial psoriasis. Variables significantly associated with facial psoriasis are younger age, younger age of onset of psoriasis of ≤ 40 years, male, severity of psoriasis involving >10% of the body surface area, higher DLQI of >10, nail involvement, and history of hospitalization due to psoriasis. This study found that facial psoriasis is not as rare as previously thought. Ambient ultraviolet light, sebum, and contact with chemicals from facial products may reduce the severity of facial psoriasis, but these factors do not reduce the prevalence of facial psoriasis. The association with younger age, younger age of onset, higher percentage of body surface area involvement, higher DLQI of > 10, nail involvement, and hospitalization due to psoriasis support the notion that facial psoriasis is a marker of severe disease. © 2016 The International Society of Dermatology.
Peripheral facial palsy: Speech, communication and oral motor function.
Movérare, T; Lohmander, A; Hultcrantz, M; Sjögreen, L
2017-02-01
The aim of the present study was to examine the effect of acquired unilateral peripheral facial palsy on speech, communication and oral functions, and to study the relationship between the degree of facial palsy and articulation, saliva control, eating ability and lip force. In this descriptive study, 27 patients (15 men and 12 women, mean age 48 years) with unilateral peripheral facial palsy were included if they were graded under 70 on the Sunnybrook Facial Grading System. The assessment was carried out in connection with customary visits to the ENT Clinic and comprised lip force, articulation and intelligibility, together with perceived ability to communicate and ability to eat and control saliva, assessed through self-response questionnaires. The patients with unilateral facial palsy had significantly lower lip force, poorer articulation and poorer ability to eat and control saliva compared with reference data from healthy populations. The degree of facial palsy correlated significantly with lip force but not with articulation, intelligibility, perceived communication ability or reported ability to eat and control saliva. Acquired peripheral facial palsy may affect communication and the ability to eat and control saliva. Physicians should be aware that there is no direct correlation between the degree of facial palsy and the possible effect on communication, eating ability and saliva control. Physicians are therefore recommended to ask specific questions relating to problems with these functions during customary medical visits and offer possible intervention by a speech-language pathologist or a physiotherapist. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Analysis of Facial Injuries Caused by Power Tools.
Kim, Jiye; Choi, Jin-Hee; Hyun Kim, Oh; Won Kim, Sug
2016-06-01
The number of injuries caused by power tools is steadily increasing as more domestic woodwork is undertaken and more power tools are used recreationally. Injuries caused by power tools are an issue because they can lead to substantial costs for patients and the national insurance system. The increase in hand surgery as a consequence of the use of power tools, its economic impact, and the characteristics of the hand injuries caused by power saws have been described. In recent years, the authors have noticed that, in addition to hand injuries, facial injuries caused by power tools commonly present to the emergency room. This study aimed to review the data on facial injuries caused by power saws gathered from patients who visited the trauma center at our hospital over the last 4 years, and to analyze the incidence and epidemiology of these injuries. The authors found that facial injuries caused by power tools have risen continually. Such injuries are accidental, and they cause permanent facial disfigurement and functional disability. Accidents are almost inevitable in particular workplaces; however, most facial injuries could be avoided by providing sufficient operator training and by tool operators wearing suitable protective devices. The evaluation of the epidemiology and patterns of facial injuries caused by power tools in this study should provide the information required to reduce the number of accidental injuries.
Foolad, Negar; Shi, Vivian Y; Prakash, Neha; Kamangar, Faranak; Sivamani, Raja K
2015-06-16
Rosacea and melasma are two common skin conditions in dermatology. Both conditions have a predilection for the centrofacial region, where sebaceous gland density is highest. However, it is not known whether sebaceous function is associated with these conditions. We aimed to assess the relationship between facial glabellar wrinkle severity and facial sebum excretion rate in individuals with rosacea, melasma, both conditions, and rhytides. Secondly, the purpose of this study was to utilize high-resolution 3D facial modeling and measurement technology to obtain information on glabellar rhytid count and severity. A total of 21 subjects participated in the study. Subjects were divided into four groups based on facial features: rosacea-only, melasma-only, rosacea and melasma, and rhytides-only. A high-resolution facial photograph was taken, followed by measurement of the facial sebum excretion rate (SER). The SER was found to decline with age and with the presence of melasma. The SER negatively correlated with increasing Wrinkle Severity Rating Scale scores. Through the use of 3D facial modeling and skin analysis technology, we found a positive correlation between clinically based grading scores and computer-generated glabellar rhytid count and severity. Continuing research with facial modeling and measurement systems will allow for the development of more objective facial assessments. Future studies need to assess the role of technology in stratifying the severity and subtypes of rosacea and melasma. Furthermore, the role of sebaceous regulation may have important implications for photoaging.
Sliwa, Julia; Planté, Aurélie; Duhamel, Jean-René; Wirth, Sylvia
2016-03-01
Social interactions make up, to a large extent, the prime material of episodic memories. We therefore asked how social signals are coded by neurons in the hippocampus. The human hippocampus is home to neurons representing familiar individuals in an abstract and invariant manner (Quian Quiroga et al. 2009). In contradistinction, the activity of rat hippocampal cells is only weakly altered by the presence of other rats (von Heimendahl et al. 2012; Zynyuk et al. 2012). We probed the activity of monkey hippocampal neurons in response to faces and voices of familiar and unfamiliar individuals (monkeys and humans). Thirty-one percent of neurons recorded without prescreening responded to faces or to voices. Yet responses to faces were more informative about individuals than responses to voices, and neuronal responses to facial and vocal identities were not correlated, indicating that in our sample identity information was not conveyed in an invariant manner as in human neurons. Overall, the responses displayed by monkey hippocampal neurons were similar to those of neurons recorded simultaneously in inferotemporal cortex, whose role in face perception is established. These results demonstrate that the monkey hippocampus participates in the read-out of social information, contrary to the rat hippocampus, but possibly lacks the explicit conceptual coding found in humans. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Anderson, Craig L; Monroy, Maria; Keltner, Dacher
2018-04-01
Emotional expressions communicate information about the individual's internal state and evoke responses in others that enable coordinated action. The current work investigated the informative and evocative properties of fear vocalizations in a sample of youth from underserved communities and military veterans while white-water rafting. Video-taped footage of participants rafting through white-water rapids was coded for vocal and facial expressions of fear, amusement, pride, and awe, yielding more than 1,300 coded expressions, which were then related to measures of subjective emotion and cortisol response. Consistent with informative properties of emotional expressions, fear vocalizations were positively and significantly related to facial expressions of fear, subjective reports of fear, and individuals' cortisol levels measured after the rafting trip. It is important to note that this coherent pattern was unique to fear vocalizations; vocalizations of amusement, pride, and awe were not significantly related to fear expressions in the face, subjective reports of fear, or cortisol levels. Demonstrating the evocative properties of emotional expression, fear vocalizations of individuals appeared to evoke fear vocalizations in other people in their raft, and cortisol levels of individuals within rafts similarly converged at the end of the trip. We discuss how the study of spontaneous emotion expressions in naturalistic settings can help address basic yet controversial questions about emotions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.
Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail
2015-02-01
Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumoral lesions, and suspected chemotherapy toxicity. We noted that younger patients generally had poorer outcomes than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.
Synthesis of Speaker Facial Movement to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.
1994-01-01
A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
[Peripheral facial paralysis: the role of physical medicine and rehabilitation].
Matos, Catarina
2011-12-01
Peripheral facial paralysis (PFP) is a consequence of a peripheral neuronal lesion of the facial nerve (FN). It can be either primary (Bell's palsy) or secondary. The classical clinical presentation typically involves both the upper and lower parts of the hemiface. However, there may be other symptoms (e.g., xerophthalmia, hyperacusis, phonation and deglutition changes) that one should recall. Clinical evaluation includes a rigorous assessment of muscle tone and sensitivity in the FN territory. Some useful instruments allow better objectivity in patient evaluation (House-Brackmann System, Facial Grading System, Functional Evaluation). There are clear criteria for referral to Physical Medicine and Rehabilitation. Treatment of Bell's palsy may include pharmacotherapy, neuromuscular training (NMT), physical methods, and surgery. In the NMT field, the several treatment techniques are systematized. Therapeutic strategies should be problem-oriented and adjusted to the patient's symptoms and signs. Physical methods are reviewed. In about 15-20% of patients, permanent sequelae persist beyond 3 months of evolution. PFP is commonly a multidisciplinary condition. Therefore, it is important to review the strategies that Physical Medicine and Rehabilitation may offer.
Balconi, Michela; Mazza, Guido
2009-11-01
Alpha brain oscillation modulation was analyzed in response to masked emotional facial expressions. In addition, the behavioural activation system (BAS) and behavioural inhibition system (BIS) were considered as explicative factors to verify the effect of motivational significance on cortical activity. Nineteen subjects were shown a broad range of facial expressions of emotion (anger, fear, surprise, disgust, happiness, sadness, and neutral). The results demonstrated that anterior frontal sites were more active than central and posterior sites in response to facial stimuli. Moreover, right-side responses varied as a function of emotion type, with increased right-frontal activity for negative emotions. Finally, whereas higher-BIS subjects generated more right-hemisphere activation for some negative emotions (such as fear, anger, and surprise), Reward-BAS subjects were more responsive to positive emotion (happiness) within the left hemisphere. The valence and potential threatening power of facial expressions were considered to elucidate these cortical differences.
Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro
2014-01-01
Although in most cases Bell palsy resolves spontaneously, approximately one-third of patients will present sequelae including facial synkinesis and paresis. Currently, the techniques available for reanimation of these patients include hypoglossal nerve transposition, free muscle transfer, and cross-face nerve grafting (CFNG). Between December 2008 and March 2012, eight patients with incomplete unilateral facial paralysis were reanimated with two-stage CFNG. Gender, age at surgery, etiology of paralysis, denervation time, donor and recipient nerves, presence of facial synkinesis, and follow-up were registered. Commissural excursion and velocity and patient satisfaction were evaluated with the FACIAL CLIMA system and a questionnaire, respectively. Mean age at surgery was 33.8 ± 11.5 years; mean time of denervation was 96.6 ± 109.8 months. No complications requiring surgery were registered. The follow-up period ranged from 7 to 33 months, with a mean of 19 ± 9.7 months. FACIAL CLIMA showed improvement of both commissural excursion and velocity greater than 75% in 4 patients, greater than 50% in 2 patients, and less than 50% in the remaining 2 patients. Qualitative evaluation revealed a high grade of satisfaction in six patients (75%). Two-stage CFNG is a reliable technique for reanimation of incomplete facial paralysis with a high grade of patient satisfaction.
Zhang, Lili; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Liu, Wenwen; Bai, Xiaohui; Zhou, Meijuan; Li, Jianfeng; Wang, Haibo
2018-04-01
Valproic acid (VPA), a medication primarily used to treat epilepsy and bipolar disorder, has been applied to the repair of central and peripheral nervous system injury. The present study investigated the effect of VPA on functional recovery, survival of facial motor neurons (FMNs), and protein expression in rats after facial nerve trunk transection, using functional measurement, Nissl staining, TUNEL, immunofluorescence, and Western blot. Following facial nerve injury, all rats in group VPA showed better functional recovery, significant at the given time points, compared with the normal saline (NS) group. The Nissl staining results demonstrated that the number of surviving FMNs in group VPA was higher than that in group NS. TUNEL staining showed that axonal injury of the facial nerve could lead to neuronal apoptosis of FMNs. However, treatment with VPA significantly reduced cell apoptosis by decreasing the expression of Bax protein, and increased neuronal survival by upregulating brain-derived neurotrophic factor (BDNF) and growth associated protein-43 (GAP-43) expression in injured FMNs, compared with group NS. Overall, our findings suggest that VPA may advance functional recovery, reduce lesion-induced apoptosis, and promote neuron survival after facial nerve transection in rats. This study provides experimental evidence for a better understanding of the mechanisms of injury and repair in peripheral facial paralysis.
Cognitive penetrability and emotion recognition in human facial expressions
Marchi, Francesco
2015-01-01
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle, etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796
Does Facial Resemblance Enhance Cooperation?
Giang, Trang; Bell, Raoul; Buchner, Axel
2012-01-01
Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher between the participants and the self-resemblant composite faces than between actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effect on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095
Mirror book therapy for the treatment of idiopathic facial palsy.
Barth, Jodi Maron; Stezar, Gincy L; Acierno, Gabriela C; Kim, Thomas J; Reilly, Michael J
2014-09-01
We conducted a retrospective chart review to determine the effectiveness of treating idiopathic facial palsy with mirror book therapy in conjunction with facial physical rehabilitation. We compared outcomes in 15 patients who underwent mirror book therapy in addition to standard therapy with those of 10 patients who underwent standard rehabilitation therapy without the mirror book. Before and after treatment, patients in both groups were rated according to the Facial Grading System (FGS), the Facial Disability Index-Physical (FDIP), and the Facial Disability Index-Social (FDIS). Patients in the mirror therapy group had a mean increase of 24.9 in FGS score, 22.0 in FDIP score, and 25.0 in FDIS score, all of which represented statistically significant improvements over their pretreatment scores. Those who did not receive mirror book therapy had mean increases of 20.8, 19.0, and 14.6, respectively; these, too, represented significant improvements over baseline, and there was no statistically significant difference in improvement between the two groups. Nevertheless, our results show that patients who used mirror book therapy in addition to standard facial rehabilitation therapy experienced significant improvements in the treatment of idiopathic facial palsy. While further studies are necessary to determine whether it has a definitive, statistically significant advantage over standard therapy, we recommend adding this therapy to the rehabilitation program in view of its ease of use, low cost, and lack of side effects.
Saito, Kosuke; Tamaki, Tetsuro; Hirata, Maki; Hashimoto, Hiroyuki; Nakazato, Kenei; Nakajima, Nobuyuki; Kazuno, Akihito; Sakai, Akihiro; Iida, Masahiro; Okami, Kenji
2015-01-01
Head and neck cancer is often diagnosed at advanced stages, and surgical resection with wide margins is generally indicated, despite this treatment being associated with poor postoperative quality of life (QOL). We have previously reported on the therapeutic effects of skeletal muscle-derived multipotent stem cells (Sk-MSCs), which exert reconstitution capacity for muscle-nerve-blood vessel units. Recently, we further developed a 3D patch-transplantation system using Sk-MSC sheet-pellets. The aim of this study was to apply the 3D Sk-MSC transplantation system to the reconstitution of complex facial nerve-vascular networks after severe damage. Mouse experiments were performed for histological analysis, and rats were used for functional examinations. The Sk-MSC sheet-pellets were prepared from GFP-Tg mice and SD rats, and were transplanted into the facial resection model (ST). Culture medium was transplanted as a control (NT). In the mouse experiment, facial-nerve-palsy (FNP) scoring was performed weekly during the recovery period, and immunohistochemistry was used for the evaluation of histological recovery after 8 weeks. In rats, the contractility of facial muscles was measured via electrical stimulation of the facial nerve root, as the marker of total functional recovery at 8 weeks after transplantation. The ST group showed significantly higher FNP scores (about threefold) when compared to the NT group after 2-8 weeks. Similarly, significant functional recovery of whisker movement muscles was confirmed in the ST group at 8 weeks after transplantation. In addition, engrafted GFP+ cells formed complex branches of nerve-vascular networks, with differentiation into Schwann cells and perineurial/endoneurial cells, as well as vascular endothelial and smooth muscle cells. Thus, Sk-MSC sheet-pellet transplantation is potentially useful for functional reconstitution therapy of large defects in facial nerve-vascular networks.
Real-time Avatar Animation from a Single Image
Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.
2014-01-01
A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812
Three-dimensional analysis of facial morphology.
Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng
2014-09-01
The objectives of this study were to evaluate sexual dimorphism in facial features within Chinese and African American populations and to compare facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired using the portable 3dMDface System from 189 subjects in 2 population groups, Chinese (n = 72) and African American (n = 117). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional face of each subject. Twenty-one measurements in 4 regions, comprising 19 distances and 2 angles, were calculated and compared within and between the Chinese and African American populations. Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were present between the examined subgroups. When comparing the sex differences in facial morphology within the Chinese population, significant differences were noted in 71.43% of the parameters calculated, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the parameters calculated was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than the Chinese population. Sexual dimorphism in facial features was present in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.
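The inter-landmark distances and angles described in this abstract reduce to simple vector arithmetic on 3D coordinates. A minimal sketch of that computation (not the study's software; the landmark coordinates and the group arrays in the final comment are hypothetical):

```python
import numpy as np
from scipy import stats

def distance(p, q):
    """Euclidean distance between two 3D landmarks."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def angle(a, b, c):
    """Angle at vertex b (in degrees) formed by landmarks a-b-c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical group comparison of one measurement (e.g., face width):
# t, p = stats.ttest_ind(widths_male, widths_female)
```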
More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder
Goghari, Vina M; Sponheim, Scott R
2012-01-01
Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816
Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.
He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan
2009-07-01
Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
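The paper's core symmetry measurement, comparing LBP histograms from the two halves of the face with the resistor-average distance, can be sketched compactly. A rough numpy illustration (basic 8-neighbour LBP rather than the paper's multiresolution uniform LBP, and without the temporal block processing; the half-face arrays in the final comment are hypothetical):

```python
import numpy as np

def lbp_histogram(patch, bins=256):
    """Basic 8-neighbour LBP codes over a 2D grayscale patch,
    returned as a normalized histogram."""
    c = patch[1:-1, 1:-1]
    neighbours = [patch[:-2, :-2], patch[:-2, 1:-1], patch[:-2, 2:],
                  patch[1:-1, 2:], patch[2:, 2:], patch[2:, 1:-1],
                  patch[2:, :-2], patch[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << np.uint8(bit)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    hist = hist.astype(float) + 1e-9          # avoid empty bins
    return hist / hist.sum()

def resistor_average_distance(p, q):
    """RAD(p, q): harmonic combination of the two KL divergences."""
    kl_pq = float(np.sum(p * np.log(p / q)))
    kl_qp = float(np.sum(q * np.log(q / p)))
    if kl_pq <= 0.0 or kl_qp <= 0.0:          # identical histograms
        return 0.0
    return 1.0 / (1.0 / kl_pq + 1.0 / kl_qp)

# Asymmetry score: mirror one half-face before comparing, e.g.
# score = resistor_average_distance(lbp_histogram(left_half),
#                                   lbp_histogram(np.fliplr(right_half)))
```

A larger score indicates a greater left-right difference in texture, which is the quantity the classifier maps onto the H-B scale.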
More than mere mimicry? The influence of emotion on rapid facial reactions to faces.
Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R
2007-05-01
Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
2016-03-26
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools, such as rehabilitation systems based on interactive technologies, have appeared for the rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, facial expression through the management of the facial muscles, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games that improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developing children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE for monitoring children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of the oral-facial musculature in children with cerebral palsy.
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many biological signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system was developed and is presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features were selected to represent emotional information, and 52 audiovisual features were selected after synchronization when the speech and video streams were fused. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodal fused recognition will become the trend of emotion recognition in the future.
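The reduce-then-fuse idea can be outlined in a few lines. A hedged sketch only: a greedy wrapper selector stands in for the paper's rough-set reduction, and the inputs X_speech, X_face, and labels y are hypothetical per-sample feature matrices, not the authors' data:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

def select_and_fuse(X_speech, X_face, y, k_speech=13, k_face=10):
    """Reduce each modality separately, then fuse by concatenation
    (feature-level fusion) and train a classifier on the fused vector."""
    sel_s = SequentialFeatureSelector(
        SVC(), n_features_to_select=k_speech).fit(X_speech, y)
    sel_f = SequentialFeatureSelector(
        SVC(), n_features_to_select=k_face).fit(X_face, y)
    fused = np.hstack([sel_s.transform(X_speech), sel_f.transform(X_face)])
    return fused, SVC().fit(fused, y)
```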
Parks, Connie L; Monson, Keith L
2018-01-01
This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
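Rank-class results such as R1 through R50 can be computed directly from a matcher's similarity scores. A minimal sketch (the score matrix is a hypothetical stand-in; this is not the recognition system used in the study):

```python
import numpy as np

def rank_hits(similarity, ranks=(1, 10, 25, 50)):
    """Count probes whose true match appears within each rank cutoff.

    similarity[i, j] is the score assigned to gallery image j for
    probe i; gallery image i is assumed to be the true match for
    probe i (a square score matrix)."""
    order = np.argsort(-similarity, axis=1)          # best match first
    true_rank = np.array([int(np.where(order[i] == i)[0][0]) + 1
                          for i in range(len(order))])
    return {r: int((true_rank <= r).sum()) for r in ranks}

# Example: 48 probes against a 48-image gallery
# hits = rank_hits(scores)   # e.g. {1: 0, 10: 0, 25: 0, 50: 1}
```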
Somatic tinnitus prevalence and treatment with tinnitus retraining therapy.
Ostermann, K; Lurquin, P; Horoi, M; Cotton, P; Hervé, V; Thill, M P
2016-01-01
Somatic tinnitus originates from increased activity of the dorsal cochlear nucleus, a cross-point between the somatic and auditory systems. Its activity can be modified by auditory stimulation or somatic system manipulation. Thus, sound enrichment and white noise stimulation might decrease tinnitus and associated somatic symptoms. The present uncontrolled study sought to determine somatic tinnitus prevalence among tinnitus sufferers, and to investigate whether sound therapy with counselling (tinnitus retraining therapy; TRT) may decrease tinnitus-associated somatic symptoms. To determine somatic tinnitus prevalence, 70 patients following the TRT protocol completed the Jastreboff Structured Interview (JSI) with additional questions regarding the presence and type of somatic symptoms. Among 21 somatic tinnitus patients, we further investigated the effects of TRT on tinnitus-associated facial dysesthesia. Before and after three months of TRT, tinnitus severity was evaluated using the Tinnitus Handicap Inventory (THI), and facial dysesthesia was assessed with an extended JSI-based questionnaire. Among the evaluated tinnitus patients, 56% presented with somatic tinnitus, including 51% with facial dysesthesia, 36% who could modulate tinnitus by head and neck movements, and 13% with both conditions. Self-evaluation indicated that TRT significantly improved tinnitus and facial dysesthesia in 76% of patients. Three months of TRT led to a 50% decrease in mean THI and JSI scores regarding facial dysesthesia. Somatic tinnitus is a frequent and underestimated condition. We suggest an extension of the JSI, including specific questions regarding somatic tinnitus. TRT significantly improved tinnitus and accompanying facial dysesthesia, and could be a useful somatic tinnitus treatment.
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
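The pipeline sketched in this abstract (boosted-cascade detection, grid-sampled Gabor features, cosine-distance nearest neighbour) can be illustrated with standard OpenCV calls. A minimal sketch under assumed parameter values; the kernel size, grid density, and four orientations are illustrative choices, not the module's actual settings:

```python
import cv2
import numpy as np

# Upstream face detection would use a boosted cascade, e.g.:
# cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
#                                 "haarcascade_frontalface_default.xml")

def gabor_face_vector(gray_face, grid=8):
    """Sample Gabor responses on a regular grid over a normalized
    face crop and concatenate them into one feature vector."""
    face = cv2.resize(gray_face, (64, 64)).astype(np.float32)
    idx = np.linspace(4, 59, grid).astype(int)       # grid sample points
    parts = []
    for theta in np.arange(0, np.pi, np.pi / 4):     # 4 orientations
        kern = cv2.getGaborKernel((9, 9), sigma=2.0, theta=theta,
                                  lambd=8.0, gamma=0.5)
        resp = cv2.filter2D(face, cv2.CV_32F, kern)
        parts.append(np.abs(resp[np.ix_(idx, idx)]).ravel())
    return np.concatenate(parts)

def cosine_nearest(query, gallery, labels):
    """Nearest-neighbour classification under cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return labels[int(np.argmax(g @ q))]
```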
Transformative science education through action research and self-study practices
NASA Astrophysics Data System (ADS)
Calderon, Olga
The research studies human emotions through diverse methods and theoretical lenses. My intention in using this approach is to provide alternative ways of perceiving and interpreting emotions being experienced in the moment of arousal. Emotions are fundamental in human interactions because they are essential in the development of effective relationships of any kind, and they can also mediate hostility towards others. I begin by presenting an impressionist auto-ethnography, which narrates a personal account of how science and scientific inquiry have been entrenched in me since childhood. I describe how emotions are an important part of how I perceive and respond to the world around me. I describe science in my life in terms of natural environments, which were the initial source of scientific wonder and bafflement for me. In this auto-ethnography, I recount how social interactions shaped my perceptions about people, the world, and my education trajectory. Furthermore, I illustrate how sociocultural structures are used in different contexts to mediate several life decisions that enabled me to pursue a career in science and science education. I also reflect on how some of those sociocultural aspects mediated my emotional wellness. I reveal how my life and science are interconnected, and I present my story as a segue to the remainder of the dissertation. In chapters 2 and 3, I address a methodology and associated methods for research on facial expression of emotion. I use the facial action coding system developed by Paul Ekman in the 1970s (Ekman, 2002) to study facial representation of emotions. In chapters 4 and 5, I review the history of oximetry and ways in which an oximeter can be used to obtain information on the physiological expression of emotions. I examine oximetry data in relation to emotional physiology in three different aspects: pulse rate, oxygenation of the blood, and plethysmography (i.e., strength of pulse). In chapters 3 and 5, I include data and observations collected in a science education course for science teachers at Brooklyn College. These observations are only a small part of a larger study of emotions and mindfulness in the science classroom by a group of researchers at the City University of New York. In this context, I explore how, while teaching and learning science, emotions are represented facially and physiologically in terms of oxygenation of the blood, pulse rate, and pulse strength.
Three-dimensional gender differences in facial form of children in the North East of England.
Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross
2013-06-01
The aim of this prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children; 39 males were age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA), and an average face was constructed for each gender. The average faces were superimposed, and differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements or 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than to differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females for facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.
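Generalized Procrustes Analysis removes translation, scale, and rotation before shapes are compared. A minimal numpy sketch of the iterative mean-shape alignment (not the study's software; reflections are not constrained here), with the PCA step for shape variation left as a commented usage example:

```python
import numpy as np

def align(shape, ref):
    """Ordinary Procrustes fit of one (k, 3) landmark set to a reference:
    center, scale to unit size, then rotate optimally onto ref."""
    a = shape - shape.mean(axis=0)
    a = a / np.linalg.norm(a)
    u, _, vt = np.linalg.svd(a.T @ ref)   # orthogonal Procrustes solution
    return a @ (u @ vt)

def gpa(shapes, iters=20):
    """Generalized Procrustes Analysis over an (n, k, 3) landmark array."""
    ref = shapes[0] - shapes[0].mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    for _ in range(iters):
        aligned = np.stack([align(s, ref) for s in shapes])
        new_ref = aligned.mean(axis=0)
        new_ref = new_ref / np.linalg.norm(new_ref)
        if np.allclose(new_ref, ref, atol=1e-10):
            break
        ref = new_ref
    return aligned, ref

# Shape variation: PCA on the flattened aligned coordinates, e.g.
# aligned, mean_shape = gpa(landmarks)       # landmarks: (n, 39, 3)
# X = aligned.reshape(len(aligned), -1)
# X = X - X.mean(axis=0)
# _, s, components = np.linalg.svd(X, full_matrices=False)
# explained = s**2 / np.sum(s**2)            # variance per component
```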
IncobotulinumtoxinA treatment of facial nerve palsy after neurosurgery.
Akulov, Mihail A; Orlova, Ol'ga R; Orlova, Aleksandra S; Usachev, Dmitrij J; Shimansky, Vadim N; Tanjashin, Sergey V; Khatkova, Svetlana E; Yunosha-Shanyavskaya, Anna V
2017-10-15
This study evaluates the effect of incobotulinumtoxinA in the acute and chronic phases of facial nerve palsy after neurosurgical interventions. Patients received incobotulinumtoxinA injections (active treatment group) or standard rehabilitation treatment (control group). Functional efficacy was assessed using the House-Brackmann, Yanagihara System, and Sunnybrook Facial Grading scales, and the Facial Disability Index self-assessment. Significant improvements on all scales were seen after 1 month of incobotulinumtoxinA treatment (active treatment group, P < .05), but only after 3 months of rehabilitation treatment (control group, P < .05). At 1 and 2 years post-surgery, the prevalence of synkinesis was significantly higher in patients in the control group compared with those receiving incobotulinumtoxinA treatment (P < .05 and P < .001, respectively). IncobotulinumtoxinA treatment resulted in significant improvements in facial symmetry in patients with facial nerve injury following neurosurgical interventions. Treatment was effective for the correction of the compensatory hyperactivity of mimic muscles on the unaffected side that develops in the acute period of facial nerve palsy, and for the correction of synkinesis on the affected side that develops in the long-term period. Appropriate dosing and patient education to perform exercises to restore mimic muscle function should be considered in multimodal treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
Aging disrupts the neural transformations that link facial identity across views.
Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R
2008-01-01
Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet, it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We present face images whose discrimination relies strictly on the shape and geometry of a face at various stimulus durations. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than it does in the younger system and contributes to the age-related deterioration of face discrimination across views.
Validating Facial Aesthetic Surgery Results with the FACE-Q.
Kappos, Elisabeth A; Temp, Mathias; Schaefer, Dirk J; Haug, Martin; Kalbermatten, Daniel F; Toth, Bryant A
2017-04-01
In aesthetic clinical practice, surgical outcome is best measured by patient satisfaction and quality of life. For many years, there has been a lack of validated questionnaires. Recently, the FACE-Q was introduced, and the authors present the largest series of face-lift patients evaluated by the FACE-Q with the longest follow-up to date. Two hundred consecutive patients were identified who underwent high-superficial musculoaponeurotic system face lifts, with or without additional facial rejuvenation procedures, between January of 2005 and January of 2015. Patients were sent eight FACE-Q scales and were asked to answer questions with regard to their satisfaction. Rank analysis of covariance was used to compare different subgroups. The response rate was 38 percent. Combination of face lift with other procedures resulted in higher satisfaction than face lift alone (p < 0.05). Patients who underwent lipofilling as part of their face lift showed higher satisfaction than patients without lipofilling in three subscales (p < 0.05). Facial rejuvenation surgery, combining a high-superficial musculoaponeurotic system face lift with lipofilling and/or other facial rejuvenation procedures, resulted in a high level of patient satisfaction. The authors recommend the implementation of the FACE-Q by physicians involved in aesthetic facial surgery, to validate their clinical outcomes from a patient's perspective.
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied mainly for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance, and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers that use four directional features (FDF) are compared for appearance-based gender estimation. The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewpoint direction varying ±45 degrees horizontally and ±30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel showed the best performance (86.0%) for the facial images from all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each viewpoint was quite close to the average estimation rate over the 35 viewpoints. This suggests that the method can reasonably estimate gender within the range of viewpoints tested by learning face images from multiple directions within one class.
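The classifier comparison can be reproduced in outline with scikit-learn. A hedged sketch: X and y are hypothetical stand-ins for the FDF vectors and binary gender labels, and SNoW is omitted because it has no standard scikit-learn implementation:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """Cross-validated accuracy of LDA vs. an RBF ('Gaussian kernel')
    SVM on feature vectors X with binary gender labels y."""
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM (Gaussian kernel)", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.1%} mean accuracy")
```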
Ryu, Min-Hee; Moon, Victor A
2015-01-01
Few facelift methods are designed specifically for Asian patients. Because of their characteristic thick skin and flat, wide facial geometry, satisfactory facelift results can be difficult to achieve in these patients. The authors evaluated outcomes achieved with a high superficial musculoaponeurotic system (high-SMAS) facelift with finger-assisted facial spaces dissection to rejuvenate the aging Asian face. Fifty-three patients underwent this facelift procedure. The indication for surgery was typical sagging of the face associated with aging; the relative contraindications were previous facelift and severe facial atrophy. Mean patient age was 50.7 years. Patients received follow-up for a mean of 19 months. In all cases, improvement was seen in soft-tissue sagging of the midface and lower face. One patient experienced unilateral temporal nerve injury, 3 experienced hematoma, and 2 had wound dehiscence. Understanding surgical anatomy including facial layers, spaces, and retaining ligaments is crucial for stable application of facelift techniques in Asian patients. Because of the small number of patients evaluated in this study and the limited follow-up period, more research is needed to define suitable facelift methods for these patients. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.
Historical Techniques of Lie Detection
Vicianova, Martina
2015-01-01
Since time immemorial, lying has been a part of everyday life. For this reason, it has become a subject of interest in several disciplines, including psychology. The purpose of this article is to provide a general overview of the literature and thinking to date about the evolution of lie detection techniques. The first part explores ancient methods recorded circa 1000 B.C. (e.g., God’s judgment in Europe). The second part describes technical methods based on sciences such as phrenology, polygraph and graphology. This is followed by an outline of more modern-day approaches such as FACS (Facial Action Coding System), functional MRI, and Brain Fingerprinting. Finally, after the familiarization with the historical development of techniques for lie detection, we discuss the scope for new initiatives not only in the area of designing new methods, but also for the research into lie detection itself, such as its motives and regulatory issues related to deception. PMID:27247675
Cranial base topology and basic trends in the facial evolution of Homo.
Bastir, Markus; Rosas, Antonio
2016-02-01
Facial prognathism and projection are important characteristics in human evolution, but their three-dimensional (3D) architectonic relationships to basicranial morphology are not clear. We used geometric morphometrics and measured 51 3D landmarks in a comparative sample of modern humans (N = 78) and fossil Pleistocene hominins (N = 10) to investigate the spatial features of covariation between basicranial and facial elements. The study reveals complex morphological integration patterns in the craniofacial evolution of Middle and Late Pleistocene hominins. A downwards-orientated cranial base correlates with alveolar maxillary prognathism, relatively larger faces, and relatively larger distances between the anterior cranial base and the frontal bone (projection). This upper facial projection correlates with increased overall relative size of the maxillary alveolar process. Vertical facial height is associated with tall nasal cavities and is accommodated by an elevated anterior cranial base, possibly because of relations between the cribriform plate and the nasal cavity in relation to body size and energetics. Variation in upper- and mid-facial projection can further be produced by basicranial topology in which the midline base and nasal cavity are shifted anteriorly relative to retracted lateral parts of the base and the face. The zygomatics and the middle cranial fossae act together as bilateral vertical systems that are either projected or retracted relative to the midline facial elements, causing midfacial flatness or midfacial projection, respectively. We propose that facial flatness and facial projection reflect classical principles of craniofacial growth counterparts, while facial orientation relative to the basicranium as well as facial proportions reflect the complex interplay of head-body integration in the light of encephalization and body size decrease in Middle to Late Pleistocene hominin evolution. Developmental and evolutionary patterns of integration may only partially overlap morphologically, and traditional concepts taken from research on two-dimensional (2D) lateral X-rays and sections have led to oversimplified and overly mechanistic models of basicranial evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hong, WenMing; Cheng, HongWei; Wang, XiaoJie; Feng, ChunGuo
2017-01-01
Objective To explore and analyze the factors influencing the retention of facial nerve function after microsurgical resection of acoustic neurinoma. Methods We retrospectively analyzed 105 acoustic neuroma cases treated at our hospital from October 2006 to January 2012; all patients in the group were treated with microsurgical resection of the acoustic neuroma via a suboccipital sigmoid sinus approach. We reviewed individual patient data, outpatient records, and telephone follow-up, and used the House-Brackmann grading system to evaluate and analyze facial nerve function. Results Among the 105 patients in this study group, the complete surgical resection rate was 80.9% (85/105), the subtotal resection rate was 14.3% (15/105), and the partial resection rate was 4.8% (5/105). The rate of anatomical preservation of the facial nerve was 95.3% (100/105), and the mortality rate was 2.1% (2/105). Facial nerve function at hospital discharge, also known as immediate facial nerve function, was graded with the House-Brackmann system: excellent facial nerve function (House-Brackmann I–II) accounted for 75.2% (79/105) of cases, facial nerve function III–IV for 22.9% (24/105), and V–VI for 1.9% (2/105). Patients were followed up for more than one year, and the retention rate of excellent facial nerve function (H-B I–II) was 74.4% (58/78). Conclusion For acoustic neuroma patients after surgery, long-term (≥1 year) retention of excellent facial nerve function was closely related to surgical proficiency, immediate postoperative facial nerve function, tumor diameter, and the use of electrophysiological monitoring techniques, whereas there was no significant correlation with the patient's age, surgical approach, stripping of the internal auditory canal, cystic degeneration, tumor recurrence, coexisting obstructive hydrocephalus, or the duration of symptoms. PMID:28264236
Schlessinger, Joel; Kenkel, Jeffrey; Werschler, Philip
2011-07-01
A hydroquinone (HQ) skin care system has been designed for use in conjunction with nonsurgical procedures. The authors evaluate the efficacy of this system plus tretinoin for improving facial appearance in comparison to a standard skin care regimen in users of botulinum toxin Type A (BoNT-A). In this multicenter, randomized, investigator-masked, parallel-group study, 61 patients who received upper facial treatment with BoNT-A at a plastic surgery or dermatology clinic were randomly assigned to apply either the HQ system (cleanser, toner, proprietary 4% hydroquinone, exfoliant, and sunscreen) plus 0.05% tretinoin cream or a standard skin care regimen (cleanser, moisturizer, and sunscreen) for 120 days. Outcomes were assessed by the investigators and through a patient questionnaire. Compared with standard skin care, the HQ system plus tretinoin resulted in significantly milder fine lines/wrinkles and hyperpigmentation at Days 30, 90, and 120 (p ≤ .05) and significantly superior overall ratings for each of nine patient assessments at Days 90 and 120 (p ≤ .05). A relatively greater proportion of patients using the HQ system plus tretinoin believed that their study treatment had further enhanced the improvements attained with BoNT-A (86% vs 8%). Both regimens were generally well tolerated. Adjunctive use of the HQ system plus tretinoin can further enhance the improvements in facial appearance attained with BoNT-A. Applying the HQ system plus tretinoin offers multiple clinical benefits over standard skin care, including significantly greater improvements in fine lines/wrinkles and hyperpigmentation.
A case presentation of bilateral simultaneous Bell's palsy.
Kilic, Rahmi; Ozdek, Ali; Felek, Sevim; Safak, M Asim; Samim, Erdal
2003-01-01
Bilateral simultaneous facial paralysis is an extremely rare clinical entity. Unlike the unilateral form, bilateral facial paralysis seldom falls into Bell's category. It is most often a special finding in the symptom complex of a systemic disease; many of these are potentially life-threatening, and therefore the condition warrants urgent medical intervention. Lyme disease, Guillain-Barré syndrome, Bell's palsy, leukemia, sarcoidosis, bacterial meningitis, syphilis, leprosy, Moebius syndrome, infectious mononucleosis, and skull fracture are the most common causes of bilateral facial paralysis. Here we present a 16-year-old patient with bilateral simultaneous Bell's palsy.
EEVEE: the Empathy-Enhancing Virtual Evolving Environment
Jackson, Philip L.; Michon, Pierre-Emmanuel; Geslin, Erik; Carignan, Maxime; Beaudoin, Danny
2015-01-01
Empathy is a multifaceted emotional and mental faculty that is often found to be affected in a great number of psychopathologies, such as schizophrenia, yet it remains very difficult to measure in an ecological context. The challenge stems partly from the complexity and fluidity of this social process, but also from its covert nature. One powerful tool to enhance experimental control over such dynamic social interactions has been the use of avatars in virtual reality (VR); information about an individual in such an interaction can be collected through the analysis of his or her neurophysiological and behavioral responses. We have developed a unique platform, the Empathy-Enhancing Virtual Evolving Environment (EEVEE), which is built around three main components: (1) different avatars capable of expressing feelings and emotions at various levels based on the Facial Action Coding System (FACS); (2) systems for measuring the physiological responses of the observer (heart and respiration rate, skin conductance, gaze and eye movements, facial expression); and (3) a multimodal interface linking the avatar's behavior to the observer's neurophysiological response. In this article, we provide a detailed description of the components of this innovative platform and validation data from the first phases of development. Our data show that healthy adults can discriminate different negative emotions, including pain, expressed by avatars at varying intensities. We also provide evidence that masking part of an avatar's face (top or bottom half) does not prevent the detection of different levels of pain. This innovative and flexible platform provides a unique tool to study and even modulate empathy in a comprehensive and ecological manner in various populations, notably individuals suffering from neurological or psychiatric disorders. PMID:25805983
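The platform's closed loop (observer physiology in, avatar expression out) can be caricatured in a few lines. A toy sketch with entirely hypothetical names (the Avatar class and read_skin_conductance are illustrative, not EEVEE's API), driving a pain-related set of FACS action units from an arousal signal:

```python
import random

class Avatar:
    """Hypothetical stand-in for a FACS-driven virtual agent."""
    def set_expression(self, action_units, intensity):
        print(f"Displaying AUs {action_units} at intensity {intensity:.2f}")

def read_skin_conductance():
    """Placeholder for a biosensor read; random values stand in here."""
    return random.uniform(0.0, 1.0)

avatar = Avatar()
for _ in range(5):                        # one update per sampling tick
    arousal = read_skin_conductance()
    # Scale a pain-related AU set (4, 6, 7, 9, 10) with observer arousal
    avatar.set_expression([4, 6, 7, 9, 10], intensity=arousal)
```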
Lyme disease and Bell’s palsy: an epidemiological study of diagnosis and risk in England
Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles
2017-01-01
Background Lyme disease is caused by a tick-borne spirochaete of the Borrelia species. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell’s palsy. Aim To produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Design and setting Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell’s palsy) in isolation, and as a combination. Method Patients’ age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Results Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell’s palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Conclusion Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. PMID:28396367
Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro
2012-01-01
Face perception is critical for social communication. Given its fundamental importance in the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547
Pretreatment Hematologic Findings as Novel Predictive Markers for Facial Palsy Prognosis.
Wasano, Koichiro; Kawasaki, Taiji; Yamamoto, Sayuri; Tomisato, Shuta; Shinden, Seiichi; Ishikawa, Toru; Minami, Shujiro; Wakabayashi, Takeshi; Ogawa, Kaoru
2016-10-01
To examine the relationship between prognosis of 2 different facial palsies and pretreatment hematologic laboratory values. Multicenter case series with chart review. Three tertiary care hospitals. We examined the clinical records of 468 facial palsy patients who were treated with an antiviral drug in combination with either oral or intravenous corticosteroids in participating hospitals between 2010 and 2014. Patients were divided into a Bell's palsy group or a Hunt's palsy group. We used the Yanagihara facial nerve grading system to grade the severity of facial palsy. "Recovery" from facial palsy was defined as achieving a Yanagihara score ≥36 points within 6 months of onset and having no accompanying facial contracture or synkinesis. We collected information about pretreatment hematologic findings, demographic data, and electrophysiologic test results of the Bell and Hunt group patients who recovered and those who did not. We then compared these data across the 2 palsy groups. In the Bell's palsy group, recovered and unrecovered patients differed significantly in age, sex, electroneuronography score, stapedial muscle reflex, neutrophil rate, lymphocyte rate, neutrophil-to-lymphocyte ratio, and initial Yanagihara score. In the Hunt's palsy group, recovered and unrecovered patients differed in age, electroneuronography score, stapedial muscle reflex, monocyte rate, platelet count, mean corpuscular volume, and initial Yanagihara score. Pretreatment hematologic findings, which reflect the severity of inflammation and bone marrow dysfunction caused by a virus infection, are useful for predicting the prognosis of facial palsy. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
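One predictor named above, the neutrophil-to-lymphocyte ratio, is plain arithmetic; the snippet below shows the computation with invented counts purely for illustration.

```python
# The neutrophil-to-lymphocyte ratio (NLR) named in the abstract is a simple
# quotient of two cell counts; these values are invented, not study data.
neutrophils, lymphocytes = 5.1, 1.7   # 10^3 cells/uL
nlr = neutrophils / lymphocytes
print(f"NLR = {nlr:.2f}")
```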
Micro-Expression Recognition Using Color Spaces.
Wang, Su-Jing; Yan, Wen-Jing; Li, Xiaobai; Zhao, Guoying; Zhou, Chun-Guang; Fu, Xiaolan; Yang, Minghao; Tao, Jianhua
2015-12-01
Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimension array. The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves higher accuracy than RGB. In addition, we define a set of regions of interest (ROIs) based on the Facial Action Coding System and calculate the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray.
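As a sketch of the data layout described above, the snippet below treats a clip as a fourth-order tensor and converts its color axis from RGB to CIELab frame by frame. The clip is synthetic and the TICS transform itself is not reproduced.

```python
# Illustrative only: a micro-expression clip as a fourth-order tensor
# (height x width x frames x color), with RGB converted to CIELab per frame.
import numpy as np
from skimage.color import rgb2lab

frames = np.random.rand(64, 64, 30, 3)        # synthetic stand-in clip
lab = np.stack([rgb2lab(frames[:, :, t, :])   # per-frame color conversion
                for t in range(frames.shape[2])], axis=2)
print(lab.shape)                              # (64, 64, 30, 3)
```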
Schonhardt-Bailey, Cheryl
2017-01-01
In parliamentary committee oversight hearings on fiscal policy, monetary policy, and financial stability, where verbal deliberation is the focus, nonverbal communication may be crucial in the acceptance or rejection of arguments proffered by policymakers. Systematic qualitative coding of these hearings in the 2010-15 U.K. Parliament finds the following: (1) facial expressions, particularly in the form of anger and contempt, are more prevalent in fiscal policy hearings, where backbench parliamentarians hold frontbench parliamentarians to account, than in monetary policy or financial stability hearings, where the witnesses being held to account are unelected policy experts; (2) comparing committees across chambers, hearings in the House of Lords committee yield more reassuring facial expressions relative to hearings in the House of Commons committee, suggesting a more relaxed and less adversarial context in the former; and (3) central bank witnesses appearing before both the Lords and Commons committees tend toward expressions of appeasement, suggesting a willingness to defer to Parliament.
The use of mandibular body distraction in hemifacial microsomia
Sakamoto, Yoshiaki; Nakajima, Hideo; Ogata, Hisao; Kishi, Kazuo
2013-01-01
Objective: The goals of treatment for hemifacial microsomia include horizontalization of the occlusal plane and acquisition of facial symmetry. Although horizontalization of the occlusal plane can be easily achieved, facial symmetry, particularly in relation to mandibular contour, can be difficult to attain. Soft tissue is generally reconstructed to correct facial asymmetry, and no studies have described correction of facial asymmetry through skeletal reconstruction. Case: A 12-year-old girl presented with grade IIb right-sided hemifacial microsomia. She was treated using Nakajima's angle-variable internal distraction (NAVID) system for mandibular body distraction. Results: Following treatment, appropriate facial symmetry was achieved, and the patient was extremely satisfied with the results. Conclusions: We successfully treated the present patient with a novel method involving distraction osteogenesis. This method was effective and useful for several reasons, including that the changes were not accompanied by postoperative tissue absorption, donor sites were not involved, and the treatment outcome could be reevaluated by adjusting distraction while the patient's appearance was being remodeled. PMID:24205479
The relationship between facial 3-D morphometry and the perception of attractiveness in children.
Ferrario, V F; Sforza, C; Poggio, C E; Colombo, A; Tartaglia, G
1997-01-01
The aim of this investigation was to determine whether attractive children differ in their three-dimensional facial characteristics from nonattractive children of the same age, race, and sex. The facial characteristics of 36 boys and 44 girls aged 8 to 9 years were investigated. Frontal and profile photographs were analyzed independently by 21 judges, and, for each view, four groups were obtained: attractive boys, nonattractive boys, attractive girls, and nonattractive girls. For each child, the three-dimensional coordinates of 16 standardized soft tissue facial landmarks were automatically collected using an infrared system and used to calculate several three-dimensional angles, linear distances, and linear distance ratios. Mean values were computed in the eight groups, and attractive and nonattractive children were compared within sex and view. Most children received a different esthetic evaluation in the separate frontal and profile assessments; concordance in both attractive and nonattractive groups was only 50%. Moreover, three-dimensional facial morphometry was not able to separate attractive and nonattractive children.
Infrared thermal facial image sequence registration analysis and verification
NASA Astrophysics Data System (ADS)
Chen, Chieh-Li; Jian, Bo-Lin
2015-03-01
To study the emotional responses of subjects to the International Affective Picture System (IAPS), an infrared thermal facial image sequence is preprocessed for registration before further analysis, such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.
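A toy version of the rigid alignment step is sketched below: re-centering a frame on a reference centroid (such as that of the eye region) and applying a candidate rotation. The two-stage genetic algorithm that searches for the best parameters is omitted, and all values are invented.

```python
# Toy rigid registration in the spirit of the abstract: translate so a
# reference centroid is re-centred, then rotate by a candidate angle.
import numpy as np
from scipy import ndimage

def register(frame, centroid, target, angle_deg):
    shift = (target[0] - centroid[0], target[1] - centroid[1])
    moved = ndimage.shift(frame, shift, order=1)       # translation
    return ndimage.rotate(moved, angle_deg, reshape=False, order=1)

frame = np.random.rand(120, 160)                       # synthetic thermal frame
aligned = register(frame, centroid=(55, 82), target=(60, 80), angle_deg=-1.5)
print(aligned.shape)
```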
Local ICA for the Most Wanted face recognition
NASA Astrophysics Data System (ADS)
Guan, Xin; Szu, Harold H.; Markowitz, Zvi
2000-04-01
Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes. Sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes, no, abstain' and are tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. A receiver operating characteristic (ROC) curve of probability of detection (PD) versus false alarm rate (FAR) is thereby obtained.
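The region-wise voting logic lends itself to a short sketch. The tally below is hypothetical (region names, votes, and threshold are invented) and omits the per-region ICA bases that would produce the votes.

```python
# Hypothetical per-region vote tally ("yes"/"no"/"abstain"), as the abstract
# describes; the ICA matching that yields the votes is not shown.
def tally(votes, threshold=2):
    yes = sum(1 for v in votes.values() if v == "yes")
    no = sum(1 for v in votes.values() if v == "no")
    return "alarm" if (yes - no) >= threshold else "pass"

votes = {"eyes": "abstain", "nose": "yes", "mouth": "yes", "chin": "no"}
print(tally(votes))  # 'pass': net score 1 is below the threshold of 2
```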
NASA Technical Reports Server (NTRS)
Gutensohn, Michael
2018-01-01
The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the natural user interface (NUI) of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
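A schematic of the pipeline the abstract outlines (resize, filter, classify with an SVM) appears below, using synthetic images rather than the JAFFE or AT&T data; the multi-dimensional SVM extension is not reproduced.

```python
# Sketch under assumptions (synthetic stand-in data): downsized, filtered
# images fed to a support vector machine, echoing the abstract's pipeline.
import numpy as np
from skimage.transform import resize
from skimage.filters import sobel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((60, 128, 128))        # stand-in face images
labels = rng.integers(0, 6, size=60)       # six emotion classes

feats = np.array([sobel(resize(im, (32, 32))).ravel() for im in images])
clf = SVC(kernel="rbf").fit(feats[:40], labels[:40])
print("held-out accuracy:", clf.score(feats[40:], labels[40:]))
```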
Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J
2014-09-01
Patients recovering from critical illness, especially those with critical illness related neuropathy, myopathy, or burns to the face, arms and hands, are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low-cost movement tracking systems based around a laptop webcam and a laser/optical gaming system sensor were utilised as control inputs for on-screen text creation software, and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, (ii) arm movement tracking by laser/camera gaming sensor and modified software. 16 volunteers with simulated tracheostomy and bandaged arms to simulate communication via gross movements of a burned limb communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with each system respectively; however, all messages were comprehensible. The average speed of sentence formation was 81 s (range 58-120 s) with the facial feature tracking system and 104 s (range 60-160 s) with the arm movement tracking system (P<0.001, 2-tailed independent sample t-test). Both devices may be potentially useful communication aids in patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
Pilurzi, G; Hasan, A; Saifee, T A; Tolu, E; Rothwell, J C; Deriu, F
2013-01-01
Previous studies of the cortical control of human facial muscles documented the distribution of corticobulbar projections and the presence of intracortical inhibitory and facilitatory mechanisms. Yet surprisingly, given the importance and precision in control of facial expression, there have been no studies of the afferent modulation of corticobulbar excitability or of the plasticity of synaptic connections in the facial primary motor cortex (face M1). In 25 healthy volunteers, we used standard single- and paired-pulse transcranial magnetic stimulation (TMS) methods to probe motor-evoked potentials (MEPs), short-intracortical inhibition, intracortical facilitation, short-afferent and long-afferent inhibition and paired associative stimulation in relaxed and active depressor anguli oris muscles. Single-pulse TMS evoked bilateral MEPs at rest and during activity that were larger in contralateral muscles, confirming that corticobulbar projection to lower facial muscles is bilateral and asymmetric, with contralateral predominance. Both short-intracortical inhibition and intracortical facilitation were present bilaterally in resting and active conditions. Electrical stimulation of the facial nerve paired with a TMS pulse 5–200 ms later showed no short-afferent inhibition, but long-afferent inhibition was present. Paired associative stimulation tested with an electrical stimulation–TMS interval of 20 ms significantly facilitated MEPs for up to 30 min. The long-term potentiation, evoked for the first time in face M1, demonstrates that excitability of the facial motor cortex is prone to plastic changes after paired associative stimulation. Evaluation of intracortical circuits in both relaxed and active lower facial muscles as well as of plasticity in the facial motor cortex may provide further physiological insight into pathologies affecting the facial motor system. PMID:23297305
Internal representations reveal cultural diversity in expectations of facial expressions of emotion.
Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G
2012-02-01
Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture, as an intricate system of social concepts and beliefs, could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, the authors used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
NASA Astrophysics Data System (ADS)
Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide
2017-01-01
Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the regions of eyes-eyebrows and mouth for expressions of fear and disgust respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
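The appearance-based branch reduces to PCA on region pixel intensities, sketched below with synthetic crops standing in for the databases used in the paper.

```python
# Schematic of the appearance-based branch: PCA on the raw pixel intensities
# of a cropped facial region. The crops here are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA

region_pixels = np.random.rand(200, 48 * 32)   # 200 mouth crops, flattened
pca = PCA(n_components=20).fit(region_pixels)
coords = pca.transform(region_pixels)          # low-dimensional features
print(coords.shape, pca.explained_variance_ratio_[:3])
```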
Fang, Jing-Jing; Liu, Jia-Kuang; Wu, Tzu-Chieh; Lee, Jing-Wei; Kuo, Tai-Hong
2013-05-01
Computer-aided design has gained increasing popularity in clinical practice, and the advent of rapid prototyping technology has further enhanced the quality and predictability of surgical outcomes. It provides target guides for complex bony reconstruction during surgery, so surgeons can efficiently and precisely target fracture restorations. Based on three-dimensional models generated from a computed tomographic scan, precise preoperative planning simulation on a computer is possible. Combining the interdisciplinary knowledge of surgeons and engineers, this study proposes a novel surgical guidance method that incorporates a built-in occlusal wafer that serves as the positioning reference. Two patients with complex facial deformity suffering from severe facial asymmetry were recruited. In vitro facial reconstruction was first rehearsed on physical models, where a customized surgical guide incorporating a built-in occlusal stent as the positioning reference was designed to implement the surgery plan. This study presents the authors' preliminary experience with a complex facial reconstruction procedure. It suggests that in regions with fewer resources, where intraoperative computed tomographic scans or navigation systems are not available, our approach could be an effective, expedient, and straightforward aid to enhance the surgical outcome of complex facial repair.
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
2016-01-01
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy. PMID:27023561
I Think We're Alone Now: Solitary Social Behaviors in Adolescents with Autism Spectrum Disorder.
Zane, Emily; Neumeyer, Kayla; Mertens, Julia; Chugg, Amanda; Grossman, Ruth B
2017-10-10
Research into emotional responsiveness in Autism Spectrum Disorder (ASD) has yielded mixed findings. Some studies report uniform, flat and emotionless expressions in ASD; others describe highly variable expressions that are as or even more intense than those of typically developing (TD) individuals. Variability in findings is likely due to differences in study design: some studies have examined posed (i.e., not spontaneous) expressions and others have examined spontaneous expressions in social contexts, during which individuals with ASD, by nature of the disorder, are likely to behave differently than their TD peers. To determine whether (and how) spontaneous facial expressions and other emotional responses differ from those of TD individuals, we video-recorded the spontaneous responses of children and adolescents with and without ASD (between the ages of 10 and 17 years) as they watched emotionally evocative videos in a non-social context. Researchers coded facial expressions for intensity, and noted the presence of laughter and other responsive vocalizations. Adolescents with ASD displayed more intense, frequent and varied spontaneous facial expressions than their TD peers. They also produced significantly more emotional vocalizations, including laughter. Individuals with ASD may display their emotions more frequently and more intensely than TD individuals when they are unencumbered by social pressure. Differences in the interpretation of the social setting and/or understanding of emotional display rules may also contribute to differences in emotional behaviors between groups.
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial features' segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, FP type classification, and facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and a key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
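The symmetry score described above, a ratio between features measured on the two sides of the face, is easy to illustrate; the function and values below are invented stand-ins, not the authors' implementation.

```python
# Hedged sketch of a side-to-side symmetry ratio; feature values are invented.
def symmetry_score(left, right, eps=1e-9):
    lo, hi = min(left, right), max(left, right)
    return lo / (hi + eps)  # 1.0 = perfectly symmetric, toward 0 as asymmetry grows

iris_exposure = {"left": 41.0, "right": 28.5}  # e.g., exposed iris area (px)
print(round(symmetry_score(iris_exposure["left"], iris_exposure["right"]), 3))
```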
Bekele, E; Bian, D; Peterman, J; Park, S; Sarkar, N
2017-06-01
Schizophrenia is a life-long, debilitating psychotic disorder with poor outcome that affects about 1% of the population. Although pharmacotherapy can alleviate some of the acute psychotic symptoms, residual social impairments present a significant barrier that prevents successful rehabilitation. With limited resources and access to social skills training opportunities, innovative technology has emerged as a potentially powerful tool for intervention. In this paper, we present a novel virtual reality (VR)-based system for understanding facial emotion processing impairments that may lead to poor social outcome in schizophrenia. We henceforth call it a VR System for Affect Analysis in Facial Expressions (VR-SAAFE). This system integrates a VR-based task presentation platform that can minutely control facial expressions of an avatar, with or without accompanying verbal interaction, with an eye-tracker to quantitatively measure a participant's real-time gaze and a set of physiological sensors to infer his/her affective states, allowing an in-depth understanding of the emotion recognition mechanism of patients with schizophrenia based on quantitative metrics. A usability study with 12 patients with schizophrenia and 12 healthy controls was conducted to examine processing of the emotional faces. Preliminary results indicated that there were significant differences in the way patients with schizophrenia processed and responded towards the emotional faces presented in the VR environment compared with healthy control participants. The preliminary results underscore the utility of such a VR-based system that enables precise and quantitative assessment of social skill deficits in patients with schizophrenia.
Parks, Connie L; Monson, Keith L
2018-05-01
This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10=85%, R25=96%, and R50=99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
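The rank-class identification rates reported above are cumulative match tallies; the worked toy example below shows the computation on fabricated probe ranks (the printed rates will not match the study's figures).

```python
# Worked toy example of rank-based identification rates (a CMC-style tally);
# the per-probe ranks below are fabricated for illustration.
import numpy as np

true_match_ranks = np.array([1, 1, 3, 12, 2, 40, 1, 9, 27, 55])  # per probe

for k in (1, 10, 25, 50):
    rate = np.mean(true_match_ranks <= k)   # fraction matched by rank k
    print(f"R{k}: {rate:.0%}")
```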
Hemispheric differences in recognizing upper and lower facial displays of emotion.
Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D
2001-01-01
To determine if there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g. anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used by persons to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. 30 strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: 1) without instructions and 2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, for the left visual field they robustly identified the emotion displayed on the upper face. For the right visual field, they continued to identify the emotion displayed on the lower face, but to a lesser degree. Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability to recognize deceitful behaviors compared with the left hemisphere. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
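The distance-based features above are straightforward to sketch: each marker's distance to the face centre, summarised by the three statistics the abstract names. The coordinates below are random stand-ins for tracked marker positions.

```python
# Minimal sketch (coordinates invented): marker-to-centre distances summarised
# by mean, variance, and root mean square, as the abstract describes.
import numpy as np

markers = np.random.rand(8, 2) * 100   # eight virtual markers (x, y)
center = markers.mean(axis=0)          # face-centre proxy
dists = np.linalg.norm(markers - center, axis=1)

features = {
    "mean": dists.mean(),
    "variance": dists.var(),
    "rms": np.sqrt(np.mean(dists ** 2)),
}
print(features)
```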
A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).
Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A
2013-01-01
The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1 month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
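The validation metric, the distance between manually digitised and automatically tracked landmarks, can be illustrated in a few lines; the 3D coordinates below are synthetic.

```python
# Illustrative computation of the validation metric: mean Euclidean distance
# between manually digitised and automatically tracked landmarks (synthetic).
import numpy as np

manual = np.random.rand(23, 3) * 10                       # 23 landmarks, mm
tracked = manual + np.random.normal(0, 0.3, manual.shape) # simulated tracker

per_landmark = np.linalg.norm(manual - tracked, axis=1)
print(f"mean discrepancy: {per_landmark.mean():.2f} mm")
```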
Pediatric facial fractures: evolving patterns of treatment.
Posnick, J C; Wells, M; Pron, G E
1993-08-01
This study reviews the treatment of facial trauma between October 1986 and December 1990 at a major pediatric referral center. The mechanism of injury, location and pattern of facial fractures, pattern of facial injury, soft tissue injuries, and any associated injuries to other organ systems were recorded, and fracture management and perioperative complications reviewed. The study population consisted of 137 patients who sustained 318 facial fractures. Eighty-one patients (171 fractures) were seen in the acute stage, and 56 patients (147 fractures) were seen for reconstruction of a secondary deformity. Injuries in boys were more prevalent than in girls (63% versus 37%), and the 6- to 12-year cohort made up the largest group (42%). Most fractures resulted from traffic-related accidents (50%), falls (23%), or sports-related injuries (15%). Mandibular (34%) and orbital fractures (23%) predominated; fewer midfacial fractures (7%) were sustained than would be expected in a similar adult population. Three quarters of the patients with acute fractures required operative intervention. Closed reduction techniques with maxillomandibular fixation were frequently chosen for mandibular condyle fractures and open reduction techniques (35%) for other regions of the facial skeleton. When open reduction was indicated, plate-and-screw fixation was the preferred method of stabilization (65%). The long-term effects of the injuries and the treatment given on facial growth remain undetermined. Perioperative complication rates directly related to the surgery were low.
Tessier 3 Cleft in a Pre-Hispanic Anthropomorphic Figurine in El Salvador, Central America.
Aleman, Ramon Manuel; Martinez, Maria Guadalupe
2017-03-01
In 1976, Paul Tessier provided a numerical classification system for rare facial clefts, numbered from 0 to 14. The Tessier 3 cleft is a rare facial cleft extending from the philtrum of the upper lip through the wing of the nostril and reaching the medial canthus of the eye. The aim of this document was to describe a pre-Hispanic anthropomorphic figurine dating from the classic period (200 A.D.-900 A.D.), which has a Tessier 3 cleft. We also discuss the documented pre-Hispanic beliefs about facial clefts.
Silva, Lidia Aragão; Ferraz Carbonel, Adriana Aparecida; de Moraes, Andréa Regina Barbosa; Simões, Ricardo S; Sasso, Gisela Rodrigues da Silva; Goes, Lívia; Nunes, Winnie; Simões, Manuel Jesus; Patriarca, Marisa Teresinha
2017-11-01
The objective of this study is to compare the effects of topical estrogen and genistein (a soy isoflavone) on the facial skin collagen of postmenopausal women not undergoing systemic hormonal therapy. This is a prospective, double-blind, randomized, controlled clinical trial. Volunteer women (N = 30), aged 45-55 years, from the Endocrine Gynecology sector of the Gynecology Department of the Federal University of São Paulo (UNIFESP) were assigned to topical treatment with either estrogen or genistein for 24 weeks. The Ethical Committee of the Federal University of São Paulo approved the study (report no. 386/2004; registration on ClinicalTrials.gov NCT01553773). We quantified and compared facial collagen concentration before and after each treatment by performing pre-auricular skin biopsies. Our data showed an increase in the amount of both type I and type III facial collagen by the end of both treatments. However, the outcomes of the estrogen GI (ER) group were superior to those of the genistein GII (GEN) group, with statistical significance (p < 0.001). Conclusion: Treatment with topical estrogen is superior to genistein, but both have positive impacts on facial skin collagen. Nevertheless, it is still unclear whether prolonged use of genistein and other topical phytoestrogens could produce systemic effects, and further research is needed to clarify this question.
Zangara, Andrea; Blair, R J R; Curran, H Valerie
2002-08-01
Accumulating evidence from neuropsychological and neuroimaging research suggests that facial expressions are processed by at least partially separable neurocognitive systems. Recent evidence implies that the processing of different facial expressions may also be dissociable pharmacologically by GABAergic and noradrenergic compounds, although no study has directly compared the two types of drugs. The present study therefore directly compared the effects of a benzodiazepine with those of a beta-adrenergic blocker on the ability to recognise emotional expressions. A double-blind, independent group design was used with 45 volunteers to compare the effects of diazepam (15 mg) and metoprolol (50 mg) with matched placebo. Participants were presented with morphed facial expression stimuli and asked to identify which of the six basic emotions (sadness, happiness, anger, disgust, fear and surprise) were portrayed. Control measures of mood, pulse rate and word recall were also taken. Diazepam selectively impaired participants' ability to recognise expressions of both anger and fear but not other emotional expressions. Errors were mainly mistaking fear for surprise and disgust for anger. Metoprolol did not significantly affect facial expression recognition. These findings are interpreted as providing further support for the suggestion that there are dissociable systems responsible for processing emotional expressions. The results may have implications for understanding why 'paradoxical' aggression is sometimes elicited by benzodiazepines and for extending our psychological understanding of the anxiolytic effects of these drugs.
Surveillance for work-related skull fractures in Michigan.
Kica, Joanna; Rosenman, Kenneth D
2014-12-01
The objective was to develop a multisource surveillance system for work-related skull fractures. Records on work-related skull fractures were obtained from Michigan's 134 hospitals, Michigan's Workers' Compensation Agency and death certificates. Cases from the three sources were matched to eliminate duplicates from more than one source. Workplaces where the most severe injuries occurred were referred to OSHA for an enforcement inspection. There were 318 work-related skull fractures, not including facial fractures, between 2010 and 2012. In 2012, after the inclusion of facial fractures, 316 fractures were identified, of which 218 (69%) were facial fractures. The Bureau of Labor Statistics' (BLS) 2012 estimate of skull fractures in Michigan, which includes facial fractures, was 170, which was 53.8% of those identified from our review of medical records. The inclusion of facial fractures in the surveillance system increased the percentage of women identified from 15.4% to 31.2%; decreased severity (hospitalization went from 48.7% to 10.6% and loss of consciousness from 56.5% to 17.8%); decreased falls from 48.2% to 27.6% and increased assaults from 5.0% to 20.2%; shifted the most common industry from construction (13.3%) to health care and social assistance (15.0%); and shifted the highest incidence rate from males 65+ (6.8 per 100,000) to young men aged 20-24 years (9.6 per 100,000). Workplace inspections resulted in 45 violations and $62,750 in penalties. The Michigan multisource surveillance system of workplace injuries had two major advantages over the existing national system: (a) workplace investigations were initiated, hazards identified, and safety changes implemented at the facilities where the injuries occurred; and (b) a more accurate count was derived, with 86% more work-related skull fractures identified than BLS's employer-based estimate. A more comprehensive system to identify and target interventions for workplace injuries was implemented using hospital and emergency department medical records. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
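The multisource matching step amounts to de-duplicating case records across sources; a toy version on a hypothetical (person, date) key is sketched below.

```python
# Toy multisource de-duplication in the spirit of the abstract: merge case
# records from three sources on a hypothetical (person, injury date) key.
hospital = {("A123", "2011-03-02"), ("B456", "2012-07-19")}
workers_comp = {("B456", "2012-07-19"), ("C789", "2010-11-30")}
death_certs = {("D012", "2012-01-05")}

unique_cases = hospital | workers_comp | death_certs
print(len(unique_cases))  # 4 unique cases across the three sources
```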
Representations in learning new faces: evidence from prosopagnosia.
Polster, M R; Rapcsak, S Z
1996-05-01
We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given "shallow" encoding instructions to focus on facial features. By contrast, he performs relatively well with "deep" encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations.
Zhu, Bi; Chen, Chuansheng; Moyzis, Robert K; Dong, Qi; Chen, Chunhui; He, Qinghua; Stern, Hal S; Li, He; Li, Jin; Li, Jun; Lessard, Jared; Lin, Chongde
2012-01-01
This study investigated the relation between genetic variations in the dopamine system and facial expression recognition. A sample of Chinese college students (n = 478) was given a facial expression recognition task. Subjects were genotyped for 98 loci [96 single-nucleotide polymorphisms (SNPs) and 2 variable number tandem repeats] in 16 genes involved in the dopamine neurotransmitter system, including its 4 subsystems: synthesis (TH, DDC, and DBH), degradation/transport (COMT, MAOA, MAOB, and SLC6A3), receptors (DRD1, DRD2, DRD3, DRD4, and DRD5), and modulation (NTS, NTSR1, NTSR2, and NLN). To quantify the total contributions of the dopamine system to emotion recognition, we used a series of multiple regression models. Permutation analyses were performed to assess the posterior probabilities of obtaining such results. Among the 78 loci that were included in the final analyses (after excluding 12 SNPs that were in high linkage disequilibrium and 8 that were not in Hardy-Weinberg equilibrium), 1 (for fear), 3 (for sadness), 5 (for anger), 13 (for surprise), and 15 (for disgust) loci exhibited main effects on the recognition of facial expressions. Genetic variations in the dopamine system accounted for 3% for fear, 6% for sadness, 7% for anger, 10% for surprise, and 18% for disgust, with the latter surviving a stringent permutation test. Genetic variations in the dopamine system (especially the dopamine synthesis and modulation subsystems) made significant contributions to individual differences in the recognition of disgust faces. Copyright © 2012 S. Karger AG, Basel.
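The permutation analysis described above can be sketched as follows: refit the regression on shuffled outcomes and ask how often the shuffled fit matches the observed variance explained. The data here are synthetic; this is a conceptual sketch, not the study's genotype analysis.

```python
# Conceptual permutation test of variance explained (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((478, 15))                  # stand-in genotype predictors
y = X[:, 0] * 0.4 + rng.normal(size=478)   # stand-in recognition scores

obs_r2 = LinearRegression().fit(X, y).score(X, y)
null = []
for _ in range(200):                       # null distribution via shuffling
    y_perm = rng.permutation(y)
    null.append(LinearRegression().fit(X, y_perm).score(X, y_perm))
p = np.mean(np.array(null) >= obs_r2)
print(f"observed R^2 = {obs_r2:.3f}, permutation p = {p:.3f}")
```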
Dalla Costa, Emanuela; Stucke, Diana; Dai, Francesca; Minero, Michela; Leach, Matthew C.; Lebelt, Dirk
2016-01-01
Simple Summary: Acute laminitis is a common equine disease characterized by intense foot pain. This work aimed to investigate whether the Horse Grimace Scale (HGS), a facial-expression-based pain coding system, can be usefully applied to assess pain associated with acute laminitis in horses at rest. Ten horses, referred as acute laminitis cases with no prior treatment, were assessed at admission and at seven days after the initial evaluation and treatment. The authors found that the Horse Grimace Scale is a potentially effective method to assess pain associated with acute laminitis in horses at rest, as horses showing high HGS scores also exhibited higher Obel scores, and veterinarians classified them in a more severe painful state.
Abstract: Acute laminitis is a common equine disease characterized by intense foot pain, both acutely and chronically. The Obel grading system is the most widely accepted method for describing the severity of laminitis by equine practitioners; however, this method requires movement (walk and trot) of the horse, causing further intense pain. The recently developed Horse Grimace Scale (HGS), a facial-expression-based pain coding system, may offer a more effective means of assessing the pain associated with acute laminitis. The aims of this study were: to investigate whether HGS can be usefully applied to assess pain associated with acute laminitis in horses at rest, and to examine whether scoring HGS from videos produces similar results to those obtained from still images. Ten horses, referred as acute laminitis cases with no prior treatment, were included in the study. Each horse was assessed using the Obel and HGS (from images and videos) scales at admission (before any treatment) and at seven days after the initial evaluation and treatment. The results of this study suggest that HGS is a potentially effective method to assess pain associated with acute laminitis in horses at rest, as horses showing high HGS scores also exhibited higher Obel scores and veterinarians classified them in a more severe painful state. Furthermore, the inter-observer reliability of the HGS total score was good for both still-image and video evaluation. There was no significant difference in HGS total scores between the still images and videos, suggesting that there is a possibility of applying the HGS in clinical practice by observing the horse for a short time. However, further validation studies are needed prior to applying the HGS in a clinical setting. PMID:27527224
Saito, Kosuke; Tamaki, Tetsuro; Hirata, Maki; Hashimoto, Hiroyuki; Nakazato, Kenei; Nakajima, Nobuyuki; Kazuno, Akihito; Sakai, Akihiro; Iida, Masahiro; Okami, Kenji
2015-01-01
Head and neck cancer is often diagnosed at advanced stages, and surgical resection with wide margins is generally indicated, despite this treatment being associated with poor postoperative quality of life (QOL). We have previously reported on the therapeutic effects of skeletal muscle-derived multipotent stem cells (Sk-MSCs), which exert reconstitution capacity for muscle-nerve-blood vessel units. Recently, we further developed a 3D patch-transplantation system using Sk-MSC sheet-pellets. The aim of this study is the application of the 3D Sk-MSC transplantation system to the reconstitution of facial complex nerve-vascular networks after severe damage. Mouse experiments were performed for histological analysis and rats were used for functional examinations. The Sk-MSC sheet-pellets were prepared from GFP-Tg mice and SD rats, and were transplanted into the facial resection model (ST). Culture medium was transplanted as a control (NT). In the mouse experiment, facial-nerve-palsy (FNP) scoring was performed weekly during the recovery period, and immunohistochemistry was used for the evaluation of histological recovery after 8 weeks. In rats, contractility of facial muscles was measured via electrical stimulation of the facial nerve root, as the marker of total functional recovery at 8 weeks after transplantation. The ST-group showed significantly higher FNP scores (approximately threefold) when compared to the NT-group after 2–8 weeks. Similarly, significant functional recovery of whisker movement muscles was confirmed in the ST-group at 8 weeks after transplantation. In addition, engrafted GFP+ cells formed complex branches of nerve-vascular networks, with differentiation into Schwann cells and perineurial/endoneurial cells, as well as vascular endothelial and smooth muscle cells. Thus, Sk-MSC sheet-pellet transplantation is potentially useful for functional reconstitution therapy of large defects in facial nerve-vascular networks. PMID:26372044
Effects of ozone therapy on facial nerve regeneration.
Ozbay, Isa; Ital, Ilker; Kucur, Cuneyt; Akcılar, Raziye; Deger, Aysenur; Aktas, Savas; Oghan, Fatih
Ozone may promote moderate oxidative stress, which upregulates endogenous antioxidant systems. A number of antioxidants have been investigated therapeutically for improving peripheral nerve regeneration; however, no previous studies have reported the effect of ozone therapy on facial nerve regeneration. We aimed to evaluate the effect of ozone therapy on facial nerve regeneration. Fourteen Wistar albino rats with experimental nerve crush injuries were randomly divided into two groups: a control group, which received saline treatment post-crush, and an experimental group, which received ozone treatment. All animals underwent surgery in which the left facial nerve was exposed and crushed. Treatment with saline or ozone began on the day of the nerve crush. Left facial nerve stimulation thresholds were measured before crush, immediately after crush, and after 30 days. After the 30-day measurement, the crushed facial nerve was excised. All specimens were studied using light and electron microscopy. Post-crush, the ozone-treated group had lower stimulation thresholds than the saline group; although this difference did not reach statistical significance, it suggests greater functional improvement in the ozone group. Significant differences were found in vascular congestion, macrovacuolization, and myelin thickness between the ozone and control groups, as well as in axonal degeneration and myelin ultrastructure. We conclude that ozone therapy exerted a beneficial effect on the regeneration of crushed facial nerves in rats. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin
2015-09-01
The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females making six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)] and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentage of RMs involved in the ten highest mean maximum displacement values for at least one expression was 47.6% in males and 61.9% in females; that is, the movements of the RMs were larger in males than in females but involved a narrower set of markers. Expanding our understanding of facial expression requires morphological studies of the facial muscles together with studies of their complex functionality. Conducting these alongside quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
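For orientation only (not the authors' code), a minimal Python sketch of the displacement computation the abstract implies, assuming the metric is the Euclidean distance of each marker from its neutral first-frame position; the toy trajectories are hypothetical:

```python
import numpy as np

def mean_max_displacement(trajectories):
    """Mean, across markers, of each marker's maximum Euclidean
    displacement from its first-frame (neutral) position.

    trajectories: array of shape (n_markers, n_frames, 3), in mm.
    """
    rest = trajectories[:, :1, :]                       # neutral pose, kept 3-D for broadcasting
    disp = np.linalg.norm(trajectories - rest, axis=2)  # (n_markers, n_frames)
    return float(disp.max(axis=1).mean())

# Hypothetical toy data: 44 markers tracked over 120 frames of one expression.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0.0, 0.1, size=(44, 120, 3)), axis=1)
print(f"mean maximum displacement: {mean_max_displacement(traj):.2f} mm")
```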
Recognition of children on age-different images: Facial morphology and age-stable features.
Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina
2017-07-01
The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, wide surveillance coverage potentially provides image material for comparison with images of missing children that may facilitate identification. The aim of the study was to determine whether facial features are stable over time and can be utilized for facial recognition by comparing facial images of children at different ages, and to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; and (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognizing the same person in age-different images appears to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
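As an illustration of the mole-comparison idea (the paper's MATLAB implementation is not reproduced in the abstract), a minimal Python sketch; the interocular normalization and the tolerance are assumptions introduced here:

```python
import numpy as np

def normalized_moles(moles_xy, eye_left_xy, eye_right_xy):
    """Express mole coordinates in a translation/scale-invariant frame
    anchored on the interocular axis (assumed normalization)."""
    eye_l = np.asarray(eye_left_xy, float)
    eye_r = np.asarray(eye_right_xy, float)
    origin = (eye_l + eye_r) / 2.0
    iod = np.linalg.norm(eye_r - eye_l)          # interocular distance
    return (np.asarray(moles_xy, float) - origin) / iod

def mole_match_score(moles_a, moles_b, tol=0.05):
    """Fraction of moles in image A with a counterpart in image B closer
    than `tol` (in interocular-distance units); both inputs normalized."""
    hits = sum(np.linalg.norm(moles_b - m, axis=1).min() <= tol for m in moles_a)
    return hits / len(moles_a)
```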
Bilateral Bell palsy as a presenting sign of preeclampsia.
Vogell, Alison; Boelig, Rupsa C; Skora, Joanna; Baxter, Jason K
2014-08-01
Bell palsy is a facial nerve neuropathy that is a rare disorder but occurs at higher frequency in pregnancy. Almost 30% of cases are associated with preeclampsia or gestational hypertension. Bilateral Bell palsy occurs in only 0.3%-2.0% of cases of facial paralysis, has a poorer prognosis for recovery, and may be associated with a systemic disorder. We describe a case of a 24-year-old primigravid woman with a twin gestation at 35 weeks who was diagnosed initially with bilateral facial palsy and subsequently with preeclampsia. She then developed partial hemolysis, elevated liver enzymes, and low platelet count (HELLP) syndrome, prompting the diagnosis of severe preeclampsia, and was delivered. Bilateral facial palsy is a rare entity in pregnancy that may be the first sign of preeclampsia and suggests increased severity of disease, warranting close monitoring.
Oral-facial-digital syndrome type 1 with hypothalamic hamartoma and Dandy-Walker malformation.
Azukizawa, Takayuki; Yamamoto, Masahito; Narumiya, Seirou; Takano, Tomoyuki
2013-04-01
We report a 1-year-old girl with oral-facial-digital syndrome type 1 with multiple malformations of the oral cavity, face, digits, and central nervous system, including agenesis of the corpus callosum, the presence of intracerebral cysts, and agenesis of the cerebellar vermis, which is associated with the subarachnoid space separating the medial sides of the cerebellar hemispheres. This child also had a hypothalamic hamartoma and a Dandy-Walker malformation, which have not been reported previously. The clinical features, including cerebral malformations, in several types of oral-facial-digital syndrome, overlap with each other. Further accumulation of new case reports and identification of new genetic mutations in oral-facial-digital syndrome may provide novel and important insights into the genetic mechanisms of this syndrome. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Silverstein, Evan Asher
For a radiation oncology clinic, the devices available to assist in the workflow for radiotherapy treatments are quite numerous. Processes such as patient verification, motion management, and respiratory motion tracking can all be improved upon by devices currently on the market. These three processes can directly impact patient safety and treatment efficacy and, as such, are important to track and quantify. Most available products provide a solution for only one of these processes and may be out of reach of a typical radiation oncology clinic due to difficult implementation and incorporation with existing hardware. This manuscript investigates the use of the Microsoft Kinect v2 sensor to provide solutions for all three processes while maintaining a relatively simple and easy-to-use implementation. To assist with patient verification, the Kinect system was programmed to create a facial recognition and recall process. The basis of the facial recognition algorithm is a facial mapping library distributed by Microsoft within its Software Development Kit (SDK). The system extracts 31 fiducial points representing various facial landmarks; 3D vectors are created between each pair of the 31 points, and the magnitude of each vector is calculated. This allows a face to be defined as a collection of 465 specific vector magnitudes. The 465 vector magnitudes defining a face are then used both in the creation of a facial reference data set and in subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. In total, 5299 trials were performed, and threshold parameters were created for match determination. Optimization of these parameters by way of ROC curves indicated a system sensitivity of 96.5% and a specificity of 96.7%. These results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a pre-collected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 seconds, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants. Ambient light played a crucial role in the accuracy and reproducibility of the facial recognition system: testing at various light levels found that ambient light greater than 200 lux produced the most accurate results. The acquisition process should therefore be set up to ensure consistent ambient light conditions across both the reference recording session and subsequent real-time identification sessions. In developing a motion management process with the Kinect, two separate but complementary processes were created. First, to track large-scale anatomical movements, the automatic skeletal tracking capabilities of the Kinect were utilized; 25 specific body joints (head, elbow, knee, etc.) make up the skeletal frame and are locked to relative positions on the body.
Using code written in C#, these joints are tracked in 3D space and compared to an initial state of the patient, providing an indication of anatomical motion. Additionally, to track smaller, more subtle movements of a specific area of the body, a user-drawn ROI can be created. Here, the depth values of all pixels associated with the body in the ROI are compared to the initial state. The system counts the number of live pixels whose depth differs from the initial state by more than a specified threshold, and the area of each such pixel is calculated based on its depth. The percentage of area moved (PAM) relative to the ROI area then becomes an indication of gross movement within the ROI. In this study, 9 specific joints proved to be stable during data acquisition. When the couch was moved in orthogonal directions, each recorded coordinate showed a relatively linear trend of movement but not the expected 1:1 relationship to couch movement. Instead, the vector magnitude between the initial and current position proved a better indicator of movement. Five of the 9 joints (Left/Right Elbow, Left/Right Hip, and Spine-Base) showed relatively consistent values for radial movements of 5 mm and 10 mm, achieving a 20%-25% coefficient of variation. For these 5 joints, threshold values of 3 mm and 7.5 mm in calculated radial distance could be set for 5 mm and 10 mm of actual movement, respectively. (Abstract shortened by ProQuest.)
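As an illustration of the 465-magnitude face signature described above (a Python sketch, not the dissertation's C# implementation; the matching rule and tolerances below are hypothetical stand-ins for the ROC-tuned thresholds):

```python
import numpy as np
from itertools import combinations

def face_signature(fiducials):
    """465 pairwise Euclidean distances between 31 3-D facial fiducial
    points (31 choose 2 = 465), as described in the abstract."""
    pts = np.asarray(fiducials, float)
    assert pts.shape == (31, 3)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(31), 2)])

def is_match(sig_live, sig_ref, rel_tol=0.02, min_fraction=0.9):
    """Declare a match when enough of the 465 magnitudes agree within a
    relative tolerance; both parameters are assumptions for illustration."""
    agree = np.abs(sig_live - sig_ref) <= rel_tol * sig_ref
    return bool(agree.mean() >= min_fraction)
```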
ERIC Educational Resources Information Center
Herridge, Matt L.; Harrison, David W.; Mollet, Gina A.; Shenal, Brian V.
2004-01-01
The effects of hostility and a cold pressor stressor on the accuracy of facial affect perception were examined in the present experiment. A mechanism whereby physiological arousal level is mediated by systems which also mediate accuracy of an individual's interpretation of affective cues is described. Right-handed participants were classified as…
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on video sequence images and dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction on the eye and nose images separately, and then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part, and 97.25% for the whole face).
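For orientation, a minimal sketch of the 2DPCA half of such a hybrid pipeline (a generic textbook formulation in Python; the 2DLDA stage and the Multi-Layer Perceptron classifier of ACPDL2D are omitted):

```python
import numpy as np

def two_d_pca(images, n_components):
    """Minimal 2DPCA: project each image matrix A onto the leading
    eigenvectors of the image scatter matrix G.

    images: array of shape (n_samples, h, w).
    Returns the projector X (w, n_components) and features (n, h, n_components).
    """
    A = np.asarray(images, float)
    centered = A - A.mean(axis=0)
    # G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)         # ascending eigenvalues
    X = eigvecs[:, ::-1][:, :n_components]       # top eigenvectors
    return X, A @ X
```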
[Neurological disease and facial recognition].
Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko
2012-07-01
To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by a unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Furthermore, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered to be Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the 'Reading the Mind in the Eyes' test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and the surrounding limbic system. These social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Furthermore, patients with myotonic dystrophy type 1 (DM1), a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment in DM1 patients is associated with lesions in the amygdalae and insulae. Our results indicate that the behaviors and personality traits of DM1 patients, as revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.
Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie
2015-03-01
Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg + 16 mg), and placebo by inhalation in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of being 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
The Functional Role of the Periphery in Emotional Language Comprehension
Havas, David A.; Matheson, James
2013-01-01
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that (1) facial expressions accompany sudden shifts in wellbeing as described in language; (2) facial expressions modulate emotional action systems during reading; and (3) emotional action systems prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition. PMID:23750145
Mandrini, Silvia; Comelli, Mario; Dall'angelo, Anna; Togni, Rossella; Cecini, Miriam; Pavese, Chiara; Dalla Toffola, Elena
2016-12-01
Only a few studies have considered the effects of combined treatment with onabotulinumtoxinA (BoNT-A) injections and biofeedback (BFB) rehabilitation on recovery from postparetic facial synkinesis (PPFS). Our aim was to explore whether a persistent improvement in facial function, beyond the pharmacological effect of BoNT-A, occurs in subjects with established PPFS after repeated sessions of BoNT-A injections combined with an educational facial training program using mirror biofeedback (BFB) exercises. A secondary objective was to investigate the trend of this presumed persistent improvement. Case-series study. Outpatient clinic of a Physical Medicine and Rehabilitation Unit. Twenty-seven patients (22 females; mean age 45±16 years) with established peripheral facial palsy were treated with a minimum of three BoNT-A injections in association with mirror BFB rehabilitation. The interval between consecutive BoNT-A injections was at least five months. At baseline and before every BoNT-A injection plus mirror BFB session (when the effect of the previous BoNT-A injection had vanished), patients were assessed with the Italian version of the Sunnybrook Facial Grading System (SB). The statistical analysis compared SB composite and partial scores before each treatment session with the baseline scores. A significant improvement of the SB composite and partial scores was observed up to the fourth session. For the "Symmetry of Voluntary Movement" partial score, the main improvement was observed in the muscles of the lower part of the face. In the chronic stage of postparetic facial synkinesis, patients may benefit from combined therapy with repeated BoNT-A injections and an educational facial training program with mirror BFB exercises, gaining an improvement in facial function up to the fourth session. This improvement reflects the acquired ability to use the facial muscles correctly: it involves not the injected muscles but those trained with mirror BFB exercises, and it persists even after the BoNT-A action has vanished. Combined therapy with repeated BoNT-A injections and an educational facial training program using mirror BFB exercises may be useful in the motor recovery of the muscles of the lower part of the face that are not injected but trained.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A realistic facial animation system driven by multiple inputs and based on a 3-D virtual head is proposed for human-machine interfaces. The system can be driven independently by video, text, or speech, and can thus interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in a particle filtering framework, and multiple measurements, i.e., the pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to one of the faces used to create morphs between two expressions substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals through face recognition utilizing facial features. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.
Origin of symbol-using systems: speech, but not sign, without the semantic urge
Sereno, Martin I.
2014-01-01
Natural language—spoken and signed—is a multichannel phenomenon, involving facial and body expression, and voice and visual intonation that is often used in the service of a social urge to communicate meaning. Given that iconicity seems easier and less abstract than making arbitrary connections between sound and meaning, iconicity and gesture have often been invoked in the origin of language alongside the urge to convey meaning. To get a fresh perspective, we critically distinguish the origin of a system capable of evolution from the subsequent evolution that system becomes capable of. Human language arose on a substrate of a system already capable of Darwinian evolution; the genetically supported uniquely human ability to learn a language reflects a key contact point between Darwinian evolution and language. Though implemented in brains generated by DNA symbols coding for protein meaning, the second higher-level symbol-using system of language now operates in a world mostly decoupled from Darwinian evolutionary constraints. Examination of Darwinian evolution of vocal learning in other animals suggests that the initial fixation of a key prerequisite to language into the human genome may actually have required initially side-stepping not only iconicity, but the urge to mean itself. If sign languages came later, they would not have faced this constraint. PMID:25092671
Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel
2017-12-01
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for all expressions except surprise. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored, and we were therefore also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This allowed us to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
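For orientation only, a rough single-bubble analogue of such orientation filtering in Python (the published procedure sums several randomly positioned Gaussian bubbles across the orientation spectrum; the Gaussian form and bandwidth here are illustrative assumptions):

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=20.0):
    """Keep Fourier energy near one orientation (note: energy at Fourier
    orientation theta corresponds to image structure at theta + 90 deg)."""
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    theta = np.degrees(np.arctan2(yy, xx)) % 180.0
    d = np.abs(theta - center_deg % 180.0)
    d = np.minimum(d, 180.0 - d)                  # angular distance wraps at 180 deg
    mask = np.exp(-0.5 * (d / bandwidth_deg) ** 2)
    mask[h // 2, w // 2] = 1.0                    # preserve mean luminance (DC term)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```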
[Facial nerve injuries cause changes in central nervous system microglial cells].
Cerón, Jeimmy; Troncoso, Julieta
2016-12-01
Our research group has described both morphological and electrophysiological changes in motor cortex pyramidal neurons associated with contralateral facial nerve injury in rats. However, little is known about the changes in surrounding glial cells that accompany these neuronal changes. Our aim was to characterize the effect of unilateral facial nerve injury on microglial proliferation and activation in the primary motor cortex. We performed immunohistochemical experiments to detect microglial cells in the brain tissue of rats with unilateral facial nerve lesions sacrificed at different times after the injury. We produced two types of lesions: reversible (by crushing, which allows functional recovery) and irreversible (by section, which produces permanent paralysis). We compared the brain tissue of control animals (no surgical intervention) and sham-operated animals with that of lesioned animals sacrificed 1, 3, 7, 21, or 35 days after the injury. In the primary motor cortex, the microglial cells of irreversibly injured animals showed proliferation and activation between three and seven days post-lesion. In reversibly injured animals, microglial proliferation was significant only at three days after the lesion. Facial nerve injury thus causes changes in microglial cells in the primary motor cortex. These modifications could be involved in generating the morphological and electrophysiological changes previously described in the pyramidal neurons of the primary motor cortex that command facial movements.
A Report of Two Cases of Solid Facial Edema in Acne.
Kuhn-Régnier, Sarah; Mangana, Joanna; Kerl, Katrin; Kamarachev, Jivko; French, Lars E; Cozzio, Antonio; Navarini, Alexander A
2017-03-01
Solid facial edema (SFE) is a rare complication of acne vulgaris. Our aim was to examine the clinical features of acne patients with solid facial edema and to give an overview of the outcomes of previous topical and systemic treatments in the cases published so far. We report two cases from Switzerland, both young men with initially papulopustular acne resistant to topical retinoids. Both cases responded to oral isotretinoin, in one case combined with oral steroids. Our cases show a strikingly similar clinical appearance to those described by Connelly and Winkelmann in 1985 (Connelly MG, Winkelmann RK. Solid facial edema as a complication of acne vulgaris. Arch Dermatol. 1985;121(1):87), as well as to cases of Morbihan's disease, which occurs as a rare complication of rosacea. Even 30 years later, the cause of the edema remains unknown. In two of the original four cases, a potential triggering factor such as facial trauma or insect bites was identified; however, our two patients did not report such occurrences. The rare cases of solid facial edema in both acne and rosacea might hold the key to understanding the specific inflammatory pattern that creates both persisting inflammation and disturbed fluid homeostasis, which can occur in slightly different presentations in dermatomyositis, angioedema, Heerfordt's syndrome, and other conditions.
Home-use TriPollar RF device for facial skin tightening: Clinical study results.
Beilin, Ghislaine
2011-04-01
Professional, non-invasive, anti-aging treatments based on radio-frequency (RF) technologies are popular for skin tightening and improvement of wrinkles. A new home-use RF device for facial treatments has recently been developed based on TriPollar™ technology. To evaluate the STOP™ home-use device for facial skin tightening using objective and subjective methods. Twenty-three female subjects used the STOP at home for a period of 6 weeks followed by a maintenance period of 6 weeks. Facial skin characteristics were objectively evaluated at baseline and at the end of the treatment and maintenance periods using a three-dimensional imaging system. Additionally, facial wrinkles were classified and subjects scored their satisfaction and sensations. Following STOP treatment, a statistically significant reduction of perioral and periorbital wrinkles was achieved in 90% and 95% of the patients, respectively, with an average periorbital wrinkle reduction of 41%. This objective result correlated well with the periorbital wrinkle classification result of 40%. All patients were satisfied to extremely satisfied with the treatments and all reported moderate to excellent visible results. The clinical study demonstrated the safety and efficacy of the STOP home-use device for facial skin tightening. Treatment can maintain a tighter and suppler skin with improvement of fine lines and wrinkles.
Anatomical evidence regarding the existence of sustentaculum facies.
Frâncu, L L; Hînganu, Delia; Hînganu, M V
2013-01-01
The face, seen as a unitary region, is subject to gravitational force. Since it is the main relational and socialization region of each individual, it presents unique modes of suspension. The elevation system of the face is complex and includes four different elements: continuity with the epicranial fascia; adhesion of the superficial structures to the peri- and inter-orbital mimic muscles; ligamentous adhesions fixing the superficial layers to the zygomatic process; and the facial fat pad. Each of these four elements was evaluated on 12 cephalic extremities, dissected in detail, layer by layer, with the images captured by an informatics system connected to an operating microscope. The acquired mesoscopic images revealed the presence of a superficial musculo-aponeurotic system (SMAS) through which the anti-gravity suspension of the superficial facial structures becomes possible. This system acts against facial aging, and all four elevation structures together form the so-called sustentaculum facies. The contribution of each of the four anatomic components, and their handling in facial rejuvenation surgery, are discussed here.
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of the skin, the subcutaneous layer, and the superficial musculo-aponeurotic system (SMAS). Embedded within this continuum mesh are 20 pairs of facial muscles that drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. To capture the relative composition of muscle and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips and eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones, were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model, and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Minimally invasive brow suspension for facial paralysis.
Costantino, Peter D; Hiltzik, David H; Moche, Jason; Preminger, Aviva
2003-01-01
To report a new technique for unilateral brow suspension for facial paralysis that is minimally invasive, limits supraciliary scar formation, does not require specialized endoscopic equipment or expertise, and has proved to be equal to direct brow suspension in durability and symmetry. Retrospective survey of a case series of 23 patients between January 1997 and December 2000. Metropolitan tertiary care center. Patients with head and neck tumors and brow ptosis caused by facial nerve paralysis. The results of the procedure were determined using the following 3-tier rating system: outstanding (excellent elevation and symmetry); acceptable (good elevation and fair symmetry); and unacceptable (loss of elevation). The results were considered outstanding in 12 patients, acceptable in 9 patients, and unacceptable in only 1 patient. One patient developed a hematoma, and 1 patient required a secondary adjustment. The technique has proved to be superior to standard brow suspension procedures with regard to scar formation and equal with respect to facial symmetry and suspension. These results have caused us to abandon direct brow suspension and to use this minimally invasive method in all cases of brow ptosis due to facial paralysis.
Hontanilla, Bernardo; Marre, Diego
2013-04-01
This study aims to analyse the efficacy of static techniques, namely gold weight implantation and tendon sling, in the reanimation of the paralytic eyelid. Upper eyelid rehabilitation, in terms of excursion and blinking velocity, was assessed using the automatic motion capture system FACIAL CLIMA. Seventy-four patients underwent a total of 101 procedures, including 58 upper eyelid gold weight implants and 43 lower eyelid tendon suspensions, with 27 patients undergoing both procedures. The presence of lagophthalmos, eye dryness, corneal ulcer, epiphora, and lower lid ptosis/ectropion was assessed preoperatively. The Wilcoxon signed-rank test was used to compare preoperative versus postoperative measurements of upper eyelid excursion and blinking velocity determined with FACIAL CLIMA, with significance set at p < 0.05. FACIAL CLIMA revealed significant improvement of eyelid excursion and blinking velocity (p < 0.001). Eye dryness improved in 49 patients (90.7%), and corneal ulcers resolved without further treatment in 12 (85.7%) of those with a gold weight implant. Implant extrusion was observed in 8.6% of cases. Of the patients with lower lid tendon suspension, correction of ptosis/ectropion and of epiphora was achieved in 93.9% and 91.9% of cases, respectively. In eight patients (18.6%), further surgery was needed to adjust tendon tension. The paralytic upper and lower eyelid can be successfully managed with gold weight implantation and tendon suspension. The FACIAL CLIMA system is a reliable method for quantifying upper eyelid excursion and blinking velocity and for detecting the exact position of the lower eyelid. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
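As an illustration of the statistical comparison described, a minimal Python sketch using SciPy's paired Wilcoxon signed-rank test (the measurements below are invented, not the study's data):

```python
from scipy.stats import wilcoxon

# Hypothetical paired pre/post measurements of upper eyelid excursion (mm)
# for the same eight patients, as a system such as FACIAL CLIMA would yield.
pre  = [4.1, 3.8, 5.0, 2.9, 4.4, 3.5, 4.0, 3.2]
post = [6.3, 5.9, 7.1, 4.8, 6.0, 5.2, 6.1, 4.9]

stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")  # significance threshold: p < 0.05
```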
Salazar-Gamarra, Rodrigo; Seelaus, Rosemary; da Silva, Jorge Vicente Lopes; da Silva, Airton Moreira; Dib, Luciano Lauria
2016-05-25
The aim of this study is to present the development of a new technique for obtaining 3D models using photogrammetry with a mobile device and free software, as a method for making digital facial impressions of patients with maxillofacial defects for the final purpose of 3D printing of facial prostheses. Using a mobile device, free software, and a photo capture protocol, 2D captures of the anatomy of a patient with a facial defect were transformed into a 3D model. The resulting digital models were evaluated for visual and technical integrity, and the technical process and resulting models were described and analyzed for technical and clinical usability. Generating 3D models for digital face impressions proved possible using photogrammetry with photos taken by a mobile device. The facial anatomy of the patient was reproduced as a *.3dp and a *.stl file with no major irregularities, and 3D printing was possible. An alternative method for capturing facial anatomy with a mobile device is thus feasible for obtaining and designing 3D models for facial rehabilitation. Further studies should be conducted to compare 3D modeling among different techniques and systems. Free software and low-cost equipment could be a feasible solution for obtaining 3D models for digital face impressions for maxillofacial prostheses, improving access for clinical centers that cannot afford the high-cost technology otherwise required.
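For orientation, a minimal sketch of a pre-print sanity check that such a *.stl model might undergo, using the open-source Open3D library in Python (not a tool named in the study; the file name is hypothetical):

```python
import open3d as o3d

# Load the photogrammetry-derived mesh (hypothetical file name).
mesh = o3d.io.read_triangle_mesh("patient_face.stl")
mesh.compute_vertex_normals()

# A mesh normally needs to be watertight before slicing for 3D printing.
print("watertight:", mesh.is_watertight())
print("surface area:", mesh.get_surface_area())
```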
Bilateral Facial Paralysis: A 13-Year Experience.
Gaudin, Robert A; Jowett, Nathan; Banks, Caroline A; Knox, Christopher J; Hadlock, Tessa A
2016-10-01
Bilateral facial palsy is a rare clinical entity caused by myriad disparate conditions requiring different treatment paradigms. Lyme disease, Guillain-Barré syndrome, and leukemia are several examples. In this article, the authors describe the cause, the initial diagnostic approach, and the management of long-term sequelae of bilateral paralysis that has evolved in the authors' center over the past 13 years. A chart review was performed to identify all patients diagnosed with bilateral paralysis at the authors' center between January of 2002 and January of 2015. Demographics, signs and symptoms, diagnosis, initial medical treatment, interventions for facial reanimation, and outcomes were reviewed. Of the 2471 patients seen at the authors' center, 68 patients (3 percent) with bilateral facial paralysis were identified. Ten patients (15 percent) presented with bilateral facial paralysis caused by Lyme disease, nine (13 percent) with Möbius syndrome, nine (13 percent) with neurofibromatosis type 2, five (7 percent) with bilateral facial palsy caused by brain tumor, four (6 percent) with Melkersson-Rosenthal syndrome, three (4 percent) with bilateral temporal bone fractures, two (3 percent) with Guillain-Barré syndrome, one (2 percent) with central nervous system lymphoma, one (2 percent) with human immunodeficiency virus infection, and 24 (35 percent) with presumed Bell palsy. Treatment included pharmacologic therapy, physical therapy, chemodenervation, and surgical interventions. Bilateral facial palsy is a rare medical condition, and treatment often requires a multidisciplinary approach. The authors outline diagnostic and therapeutic algorithms of a tertiary care center to provide clinicians with a systematic approach to managing these complicated patients.
Recognizing Age-Separated Face Images: Humans and Machines
Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel
2014-01-01
Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200
Mask face: bilateral simultaneous facial palsy in an 11-year-old boy.
Güngör, Serdal; Güngör Raif, Sabiha; Arslan, Müjgan
2013-04-01
Bilateral facial paralysis is an uncommon clinical entity, especially in the pediatric age group, and frequently occurs as a manifestation of systemic disease. The most important causes are trauma; infectious, neurological, metabolic, neoplastic, and autoimmune diseases; and idiopathic disease (Bell's palsy). We report a case of an 11-year-old boy presenting with bilateral simultaneous peripheral facial paralysis. All possible infectious causes were excluded, and the patient was diagnosed with idiopathic Bell's palsy. The most important approach in these cases is to rule out a life-threatening disease. © 2013 The Authors. Pediatrics International © 2013 Japan Pediatric Society.
Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea
2017-04-01
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.
Falco, Jeffrey J; Thomas, Andrew J; Quin, Xuan; Ashby, Shaelene; Mace, Jess C; Deconde, Adam S; Smith, Timothy L; Alt, Jeremiah A
2016-11-01
Facial pain is a cardinal symptom of chronic rhinosinusitis (CRS) with significant impacts on patient treatment selection, quality of life, and outcomes. The association between facial pain and CRS disease severity has not been systematically evaluated with validated, facial pain-specific questionnaires. Our objective was to measure pain location, severity, and interference in patients with CRS, and correlate these to the location and severity of radiographic evidence of disease. Patients with CRS were enrolled into a prospective, cross-sectional study. Patients completed the Brief Pain Inventory Short Form, which is a validated and widely used tool that measures pain location, severity, and interference with daily activities of living. The Lund-Mackay (L-M) computed tomography (CT) scoring system was used to operationalize the radiographic location and severity of inflammation. Facial pain location, severity, and interference scores were correlated to paranasal sinus opacification scores. Consecutive patients with CRS with nasal polyps (CRSwNP; n = 37) and CRS without nasal polyps (CRSsNP; n = 46) were enrolled. No significant relationship was found between the location and severity of reported facial pain and radiographic findings of disease for patients with either CRSwNP or CRSsNP. There was no difference in pain location between patients with and without radiographic disease in a given sinus. Facial pain in CRS is not predicted by the radiographic extent of disease. The location and severity of facial pain reported by the patient is not a reliable marker of the anatomic location and severity of sinonasal inflammation. Pain location should not necessarily be relied upon for guiding targeted therapy. © 2016 ARS-AAOA, LLC.
A systematic review and meta-analysis of 'Systems for Social Processes' in eating disorders.
Caglar-Nazali, H Pinar; Corfield, Freya; Cardi, Valentina; Ambwani, Suman; Leppanen, Jenni; Olabintan, Olaolu; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Scognamiglio, Pasquale; Eshkevari, Ertimiss; Micali, Nadia; Treasure, Janet
2014-05-01
Social and emotional problems have been implicated in the development and maintenance of eating disorders (ED). This paper reviews the facets of social processing in ED according to the NIMH Research Domain Criteria (RDoC) 'Systems for Social Processes' framework. Embase, Medline, PsycInfo, and Web of Science were searched for peer-reviewed articles published by March 2013. One hundred fifty-four studies measuring constructs of attachment, social communication, perception and understanding of self and others, and social dominance in people with ED were identified. Eleven meta-analyses were performed; they showed evidence that people with ED had attachment insecurity (d=1.31), perceived low parental care (d=.51), appraised high parental overprotection (d=.29), impaired facial emotion recognition (d=.44) and facial communication (d=2.10), increased facial avoidance (d=.52), reduced agency (d=.39), negative self-evaluation (d=2.27), alexithymia (d=.66), poor understanding of mental states (d=1.07), and sensitivity to social dominance (d=1.08). There is less evidence for problems with the production and reception of non-facial communication, animacy, and action. Copyright © 2013 Elsevier Ltd. All rights reserved.
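For reference, the effect sizes quoted above are Cohen's d values; the standard pooled-standard-deviation form (a textbook formula, not taken from the paper) is:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), values such as d=2.10 and d=2.27 are very large effects.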
Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.
2013-01-01
Studies of oro-facial asymmetries in nonhuman primates have largely demonstrated a right-hemispheric dominance for communicative signals and the conveyance of emotional information. A recent study in chimpanzees reported the first evidence of significant left-hemispheric dominance for attention-getting sounds and a rightward bias for species-typical vocalizations (Losin, Russell, Freeman, Meguerditchian, Hopkins & Fitch, 2008). The current study sought to extend the findings of Losin et al. (2008) with additional oro-facial assessment in a new colony of chimpanzees. When the two populations were combined, the results indicated a consistent leftward bias for attention-getting sounds and a right lateralization for species-typical vocalizations. Collectively, the results suggest that both voluntarily controlled oro-facial communication and gestural communication might share the same left-hemispheric specialization and might have coevolved into a single integrated system present in a common hominid ancestor. PMID:22867751
Wasano, K; Ishikawa, T; Kawasaki, T; Yamamoto, S; Tomisato, S; Shinden, S; Minami, S; Wakabayashi, T; Ogawa, K
2017-12-01
We describe a novel scoring system, the facial Palsy Prognosis Prediction score (PPP score), which we test for reliability in predicting the pre-therapeutic prognosis of facial palsy. We aimed to use readily available patient data that all clinicians have access to before starting treatment. Multicenter case series with chart review. Three tertiary care hospitals. We obtained haematological and demographic data from 468 facial palsy patients treated between 2010 and 2014 in three tertiary care hospitals. Patients were categorised as having Bell's palsy or Ramsay Hunt's palsy, and we compared the data of recovered and unrecovered patients. PPP scores consisted of combinatorial threshold values of continuous patient data (e.g., platelet count) and categorical variables (e.g., gender) that best predicted recovery. We created separate PPP scores for Bell's palsy patients (PPP-B) and for Ramsay Hunt's palsy patients (PPP-H). The PPP-B score included age (≥65 years), gender (male), and neutrophil-to-lymphocyte ratio (≥2.9). The PPP-H score included age (≥50 years), monocyte rate (≥6.0%), mean corpuscular volume (≥95 fl), and platelet count (≤200,000/μL). Patient recovery rate decreased significantly and stepwise with increasing PPP scores (both PPP-B and PPP-H). PPP scores (i.e., PPP-B and PPP-H) ≥2 were associated with a worse-than-average prognosis. Palsy Prognosis Prediction scores are useful for predicting the prognosis of facial palsy before beginning treatment. © 2017 John Wiley & Sons Ltd.
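For illustration, a minimal Python sketch of how such scores might be computed, assuming (as the abstract implies but does not state explicitly) one point per criterion met:

```python
def ppp_b_score(age_years, is_male, nl_ratio):
    """PPP-B score for Bell's palsy; one point per criterion (assumed)."""
    return (int(age_years >= 65)
            + int(is_male)
            + int(nl_ratio >= 2.9))      # neutrophil-to-lymphocyte ratio

def ppp_h_score(age_years, monocyte_pct, mcv_fl, platelets_per_ul):
    """PPP-H score for Ramsay Hunt's palsy; same assumed weighting."""
    return (int(age_years >= 50)
            + int(monocyte_pct >= 6.0)
            + int(mcv_fl >= 95.0)
            + int(platelets_per_ul <= 200_000))

# A score >= 2 was associated with worse-than-average prognosis.
print(ppp_b_score(70, True, 3.1))            # -> 3
print(ppp_h_score(45, 5.0, 96.0, 180_000))   # -> 2
```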
A real-time monitoring system for the facial nerve.
Prell, Julian; Rachinger, Jens; Scheller, Christian; Alfieri, Alex; Strauss, Christian; Rampp, Stefan
2010-06-01
Damage to the facial nerve during surgery in the cerebellopontine angle is indicated by A-trains, a specific electromyogram pattern. These A-trains can be quantified by the parameter "traintime," which is reliably correlated with postoperative functional outcome. The system presented was designed to monitor traintime in real time. A dedicated hardware and software platform for automated continuous analysis of the intraoperative facial nerve electromyogram was specifically designed. The automatic detection of A-trains is performed by a software algorithm for real-time analysis of nonstationary biosignals. The system was evaluated in a series of 30 patients operated on for vestibular schwannoma. A-trains can be detected and measured automatically by the described method for real-time analysis. Traintime is monitored continuously via a graphic display and is shown as an absolute numeric value during the operation. It expresses the overall, cumulative length of A-trains in a given channel; a high correlation between traintime as measured by real-time analysis and functional outcome immediately after the operation (Spearman correlation coefficient [rho] = 0.664, P < .001) and in the long term (rho = 0.631, P < .001) was observed. Automated real-time analysis of the intraoperative facial nerve electromyogram is the first technique capable of reliable continuous real-time monitoring. It can critically contribute to the estimation of functional outcome during the course of the operative procedure.
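Since traintime is described as the cumulative length of A-trains in a channel, here is a simple sketch of that final accumulation step, assuming an upstream detector has already flagged A-train samples; the real-time detection algorithm itself is not reproduced:

```python
import numpy as np

def traintime_seconds(a_train_mask, sampling_rate_hz):
    """Cumulative length of detected A-train activity in one EMG channel.

    `a_train_mask` is a boolean array marking samples that an upstream
    detector has classified as belonging to an A-train; only the
    accumulation into the 'traintime' summary value is shown here.
    """
    return np.count_nonzero(a_train_mask) / sampling_rate_hz

# Illustrative: one minute of a 2 kHz EMG channel with two detected episodes.
fs = 2000
mask = np.zeros(60 * fs, dtype=bool)
mask[1000:5000] = True     # 2.0 s A-train episode
mask[80000:82000] = True   # 1.0 s A-train episode
print(traintime_seconds(mask, fs))  # 3.0
```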
Maniu, Alma Aurelia; Harabagiu, Oana; Damian, Laura Otilia; Ştefănescu, Eugen HoraŢiu; FănuŢă, Bogdan Marius; Cătană, Andreea; Mogoantă, Carmen Aurelia
2016-01-01
Several systemic diseases, including granulomatous and infectious processes, tumors, bone disorders, collagen-vascular and other autoimmune diseases, may involve the middle ear and temporal bone. These diseases are difficult to diagnose when symptoms mimic acute otomastoiditis. The present report describes our experience with three such cases initially misdiagnosed. Their predominating symptoms were otological, with mastoiditis, hearing loss, and subsequently facial nerve palsy. The cases were considered an emergency and the patients underwent tympanomastoidectomy, under the suspicion of otitis media with cholesteatoma, in order to remove a possible abscess and to decompress the facial nerve. The common feature was the presence of severe granulation tissue filling the mastoid cavity and middle ear during surgery, without cholesteatoma. The definitive diagnoses were made by means of biopsy of the granulation tissue from the middle ear, revealing granulomatosis with polyangiitis (formerly known as Wegener's granulomatosis) in one case, and middle ear tuberculosis and diffuse large B-cell lymphoma in the other two. After specific therapy, facial nerve function improved and the atypical inflammatory states of the ear resolved. As a group, systemic diseases of the middle ear and temporal bone are uncommon but aggressive lesions. After analyzing these cases and reviewing the literature, we would like to stress the importance of microscopic examination of the affected tissue, which is required for an accurate diagnosis and effective treatment.
Anatomy of Sodium Hypochlorite Accidents Involving Facial Ecchymosis – A Review
Zhu, Wan-chun; Gyamfi, Jacqueline; Niu, Li-na; Schoeffel, G. John; Liu, Si-ying; Santarcangelo, Filippo; Khan, Sara; Tay, Kelvin C-Y.; Pashley, David H.; Tay, Franklin R.
2013-01-01
Objectives Root canal treatment forms an essential part of general dental practice. Sodium hypochlorite (NaOCl) is the most commonly used irrigant in endodontics due to its ability to dissolve organic soft tissues in the root canal system and its action as a potent antimicrobial agent. Although NaOCl accidents created by extrusion of the irrigant through root apices are relatively rare and are seldom life-threatening, they do create substantial morbidity when they occur. Methods To date, NaOCl accidents have only been published as isolated case reports. Although previous studies have attempted to summarise the symptoms involved in these case reports, there was no endeavour to analyse the distribution of soft tissue involvement in those reports. In this review, the anatomy of a classical NaOCl accident that involves facial swelling and ecchymosis is discussed. Results By summarising the facial manifestations presented in previous case reports, a novel hypothesis is presented that involves intravenous infusion of extruded NaOCl into the facial vein via non-collapsible venous sinusoids within the cancellous bone. Conclusions Understanding the mechanism involved in precipitating a classic NaOCl accident will enable the profession to make the best decision regarding the choice of irrigant delivery techniques in root canal débridement, and for manufacturers to design and improve their irrigation systems to achieve maximum safety and efficient cleanliness of the root canal system. PMID:23994710
Ariai, M Shafie; Eggers, Scott D; Giannini, Caterina; Driscoll, Colin L W; Link, Michael J
2015-10-01
Distant metastasis of mucinous adenocarcinoma from the gastrointestinal tract, ovaries, pancreas, lungs, breast, or urogenital system is a well-described entity. Mucinous adenocarcinomas from different primary sites are histologically identical, with gland cells producing a copious amount of mucin. This report describes a very rare solitary metastasis of a mucinous adenocarcinoma of unknown origin to the facial/vestibulocochlear nerve complex in the cerebellopontine angle. A 71-year-old woman presented with a several-month history of progressive neurological decline and a negative extensive workup performed elsewhere. She presented to our institution with complete left facial weakness, left-sided deafness, gait unsteadiness, headache and anorexia. A repeat magnetic resonance imaging scan of the head revealed a cystic, enhancing abnormality involving the left cerebellopontine angle and internal auditory canal. A left retrosigmoid craniotomy was performed and the lesion was completely resected. The final pathology was a mucinous adenocarcinoma of indeterminate origin. Postoperatively, the patient continued with her preoperative deficits and subsequently died of her systemic disease 6 weeks after discharge. The facial/vestibulocochlear nerve complex is an unusual location for metastatic disease in the central nervous system. Clinicians should consider metastatic tumor as a possible etiology of an unusual-appearing mass in this location causing profound neurological deficits. The prognosis after metastatic mucinous adenocarcinoma to the cranial nerves in the cerebellopontine angle may be poor. Copyright © 2015 Elsevier Inc. All rights reserved.
Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba
2014-10-01
In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the visual diagnostic DSS described in the present study, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
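A minimal sketch of the general pipeline this abstract describes: principal-component features extracted from face images and a classifier evaluated with leave-one-out cross-validation. The data here are synthetic stand-ins, and the nearest-neighbour classifier is an assumption, since the abstract does not name the exact model:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: one row per face image (flattened grayscale pixels), y: syndrome label.
# Synthetic stand-in data sized like the study (92 cases, 15 syndromes).
rng = np.random.default_rng(0)
X = rng.normal(size=(92, 64 * 64))
y = rng.integers(0, 15, size=92)

# Principal-component ("eigenface"-style) features followed by a simple
# classifier; a 1-nearest-neighbour model stands in for the unspecified one.
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=1))
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2f}")
```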
Electrophysiology of Cranial Nerve Testing: Trigeminal and Facial Nerves.
Muzyka, Iryna M; Estephan, Bachir
2018-01-01
The clinical examination of the trigeminal and facial nerves provides significant diagnostic value, especially in the localization of lesions in disorders affecting the central and/or peripheral nervous system. The electrodiagnostic evaluation of these nerves and their pathways adds further accuracy and reliability to the diagnostic investigation and the localization process, especially when different testing methods are combined based on the clinical presentation and the electrophysiological findings. The diagnostic uniqueness of the trigeminal and facial nerves is their connectivity and their coparticipation in reflexes commonly used in clinical practice, namely the blink and corneal reflexes. The other reflexes used in the diagnostic process and lesion localization are very nerve specific and add more diagnostic yield to the workup of certain disorders of the nervous system. This article provides a review of commonly used electrodiagnostic studies and techniques in the evaluation and lesion localization of cranial nerves V and VII.
Hatayama, Tomoko; Kitamura, Shingo; Tamura, Chihiro; Nagano, Mayumi; Ohnuki, Koichiro
2008-12-01
The aim of this study was to clarify the effects of 45 min of facial massage on the activity of the autonomic nervous system, anxiety and mood in 32 healthy women. Autonomic nervous activity was assessed by heart rate variability (HRV) with spectral analysis. In the spectral analysis of HRV, we evaluated the high-frequency components (HF) and the low- to high-frequency ratio (LF/HF ratio), reflecting parasympathetic nervous activity and sympathetic nervous activity, respectively. The State-Trait Anxiety Inventory (STAI) and the Profile of Mood States (POMS) were administered to evaluate psychological status. The STAI score and the negative scales of the POMS were significantly reduced following the massage, and only the LF/HF ratio was significantly enhanced after the massage. It was concluded that facial massage might refresh the subjects by reducing their psychological distress and activating the sympathetic nervous system.
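A minimal sketch of how an LF/HF ratio like the one reported here can be computed from RR intervals, assuming the conventional band limits (0.04-0.15 Hz for LF, 0.15-0.40 Hz for HF); the study's exact spectral settings are not stated in the abstract:

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_intervals_s, resample_hz=4.0):
    """LF/HF ratio from a sequence of RR intervals (in seconds).

    The tachogram is resampled onto an even time grid, its power spectral
    density is estimated with Welch's method, and power is integrated over
    the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands.
    """
    beat_times = np.cumsum(rr_intervals_s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / resample_hz)
    rr_even = interp1d(beat_times, rr_intervals_s)(grid)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=resample_hz, nperseg=256)
    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df
    return lf / hf

# Illustrative: ~5 minutes of RR intervals with a slow oscillatory component.
rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(375)) + rng.normal(0, 0.01, 375)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
```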
Femoral-facial syndrome with malformations in the central nervous system.
Leal, Evelia; Macías-Gómez, Nelly; Rodríguez, Lisa; Mercado, F Miguel; Barros-Núñez, Patricio
2003-01-01
The femoral hypoplasia-unusual facies syndrome (FFS) is a very rare association of femoral and facial abnormalities. Maternal diabetes mellitus has mainly been implicated as the causal agent. We report the second case of FFS with anomalies in the central nervous system (CNS), including corticosubcortical atrophy, colpocephaly, partial agenesis of the corpus callosum, hypoplasia of the falx cerebri and an absent septum pellucidum. The psychomotor development has been normal. We propose that the CNS defects observed in these patients are part of the spectrum of abnormalities in FFS.
Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L.; Oller, D. Kimbrough
2017-01-01
Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories: a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed "protophones" (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only, VID), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories; indeed, coders in the video-only condition could scarcely recognize the difference between silence and voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language. PMID:29423398
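The abstract rests on coder agreement data. A minimal sketch of Cohen's kappa, the standard chance-corrected agreement statistic, follows; the study's exact agreement measure is not specified in the abstract, so this is illustrative only:

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders' category labels."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    categories = np.union1d(labels_a, labels_b)
    observed = np.mean(labels_a == labels_b)
    # Expected agreement if coders labelled independently at their own rates.
    expected = sum(np.mean(labels_a == c) * np.mean(labels_b == c)
                   for c in categories)
    return (observed - expected) / (1.0 - expected)

# Illustrative: two coders labelling ten protophones as squeal/vowel/growl.
a = ["squeal", "vowel", "growl", "vowel", "vowel",
     "squeal", "growl", "vowel", "squeal", "vowel"]
b = ["squeal", "vowel", "growl", "growl", "vowel",
     "squeal", "vowel", "vowel", "squeal", "vowel"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```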
Gender and performance of community treatment assistants in Tanzania.
Jenson, Alexander; Gracewello, Catherine; Mkocha, Harran; Roter, Debra; Munoz, Beatriz; West, Sheila
2014-10-01
To examine the effects of gender and demographics of community treatment assistants (CTAs) on their performance of assigned tasks and quantity of speech during mass drug administration of azithromycin for trachoma in rural Tanzania. Surveys of CTAs and audio recordings of interactions between CTAs and villagers during drug distribution. Mass drug administration program in rural Kongwa district. Fifty-seven randomly selected CTAs and 3122 residents of villages receiving azithromycin as part of the Kongwa Trachoma Project. Speech quantity was graded by the Roter interaction analysis system, and the presence of a culturally appropriate greeting and of education on facial hygiene for trachoma prevention was determined from coded analysis of audio-recorded interactions. At sites with all-female CTAs, each CTA spent more time and spoke more in each interaction in comparison with CTAs at sites with only male CTAs and CTAs at 'mixed gender' sites (sites with both male and female CTAs). At 'mixed gender' sites, males spoke significantly more than females. Female CTAs mentioned trachoma prevention with facial cleanliness more than twice as often as male CTAs; however, both genders mentioned hygiene in <10% of interactions. Both genders used culturally appropriate greetings in <25% of interactions. Gender dynamics affect the amount of time that CTAs spend with villagers during drug distribution, and the relative amount of speech when both genders work together. Neither gender is meeting expectations for trachoma prevention education and greeting villagers, and novel training methods are necessary. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Casal, Diogo; Pelliccia, Giovanni; Pais, Diogo; Carrola-Gomes, Diogo; Angélica-Almeida, Maria; Videira-Castro, José; Goyri-O'Neill, João
2017-07-29
Open injuries to the face involving the external carotid artery are uncommon. These injuries are normally associated with laceration of the facial nerve because this nerve is more superficial than the external carotid artery. Hence, external carotid artery lesions are usually associated with facial nerve dysfunction. We present an unusual case report in which the patient had an injury to this artery with no facial nerve compromise. A 25-year-old Portuguese man sustained a stab wound injury to his right preauricular region with a broken glass. Immediate profuse bleeding ensued. Provisional tamponade of the wound was achieved at the scene by two off-duty doctors. He was initially transferred to a district hospital, where a large arterial bleeding was observed and a temporary compressive dressing was applied. Subsequently, the patient was transferred to a tertiary hospital. At admission in the emergency room, he presented a pulsating lesion in the right preauricular region and slight weakness in the territory of the inferior buccal branch of the facial nerve. The physical examination suggested an arterial lesion superficial to the facial nerve. However, in the operating theater, a section of the posterior and lateral flanks of the external carotid artery inside the parotid gland was identified. No lesion of the facial nerve was observed, and the external carotid artery was repaired. To better understand the anatomical rationale of this uncommon clinical case, we dissected the preauricular region of six cadavers whose vascular systems had previously been injected with colored latex solutions. A small triangular space between the two main branches of division of the facial nerve, in which the external carotid artery was not covered by the facial nerve, was observed bilaterally in all cases. This clinical case illustrates that, in a preauricular wound, the external carotid artery can be injured without facial nerve damage. However, no similar description was found in the reviewed literature, which suggests that this must be a very rare occurrence. According to the dissection study performed, this is due to the existence of a triangular space between the cervicofacial and temporofacial nerve trunks in which the external carotid artery is not covered by the facial nerve or its branches.
An optimized ERP brain-computer interface based on facial expression changes.
Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2014-06-01
Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
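For context on the metrics compared above: information transfer rate in ERP-BCI studies is conventionally computed with the Wolpaw formula from accuracy, the number of selectable classes, and the time per selection. A minimal sketch follows; the paper's exact ITR variant is an assumption:

```python
import math

def wolpaw_itr(accuracy, n_classes, selection_time_s):
    """Information transfer rate in bits per minute via the standard
    Wolpaw formula. Accuracy 0 is treated as carrying no information,
    a simplifying convention for this sketch."""
    p, n = accuracy, n_classes
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s

# Illustrative: a 36-class speller at 90% accuracy, 10 s per selection.
print(f"{wolpaw_itr(0.90, 36, 10.0):.1f} bits/min")
```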
Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James
2017-01-01
Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the single-emotion sub-scores for happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which may be impaired in patients with PD. These results open new clinical and rehabilitation perspectives.
Zhang, Lei; Shen, Shunyao; Yu, Hongbo; Shen, Steve Guofang; Wang, Xudong
2015-07-01
The aim of this study was to investigate the use of computer-aided design and computer-aided manufacturing (CAD/CAM) to construct hydroxyapatite (HA)/epoxide acrylate maleic (EAM) compound artificial implants for craniomaxillofacial bone defects. Computed tomography, CAD/CAM and three-dimensional reconstruction, as well as rapid prototyping, were performed in 12 patients between 2008 and 2013. The customized HA/EAM compound artificial implants were manufactured through selective laser sintering using a rapid prototyping machine into the exact geometric shapes of the defects. The HA/EAM compound artificial implants were then implanted during surgical reconstruction. Color-coded superimpositions generated in Geomagic Studio demonstrated the discrepancy between the virtual plan and the achieved results. As a result, the HA/EAM compound artificial bone implants were perfectly matched with the facial areas that needed reconstruction. The postoperative aesthetic and functional results were satisfactory. The color-coded superimpositions demonstrated good consistency between the virtual plan and the achieved results: the three-dimensional maximum deviation was 2.12 ± 0.65 mm and the three-dimensional mean deviation was 0.27 ± 0.07 mm. No facial nerve weakness or pain was observed at the follow-up examinations. Only 1 implant had to be removed 2 months after the surgery owing to severe local infection. No other complication was noted during the follow-up period. In conclusion, the computer-aided, individually fabricated HA/EAM compound artificial implant is a good craniomaxillofacial surgical technique that yielded improved aesthetic results and functional recovery after reconstruction.
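A simplified sketch of the deviation metric behind such color-coded superimpositions, using nearest-neighbour point distances as a stand-in for the true point-to-surface distances that tools such as Geomagic Studio compute; the meshes here are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(achieved_points, planned_points):
    """For each vertex of the achieved (post-operative) surface, find the
    distance to the nearest vertex of the virtually planned surface, and
    summarize as mean and maximum deviation."""
    distances, _ = cKDTree(planned_points).query(achieved_points)
    return distances.mean(), distances.max()

# Illustrative: a planned point cloud and an achieved one offset by noise.
rng = np.random.default_rng(3)
planned = rng.uniform(0, 50, size=(5000, 3))            # mm coordinates
achieved = planned + rng.normal(0, 0.2, size=(5000, 3))
mean_dev, max_dev = surface_deviation(achieved, planned)
print(f"mean {mean_dev:.2f} mm, max {max_dev:.2f} mm")
```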
2011-01-01
We recently demonstrated the utility of quantifying spontaneous pain in mice via the blinded coding of facial expressions. As the majority of preclinical pain research is in fact performed in the laboratory rat, we attempted to modify the scale for use in this species. We present herein the Rat Grimace Scale, and show its reliability, accuracy, and ability to quantify the time course of spontaneous pain in the intraplantar complete Freund's adjuvant, intraarticular kaolin-carrageenan, and laparotomy (post-operative pain) assays. The scale's ability to demonstrate the dose-dependent analgesic efficacy of morphine is also shown. In addition, we have developed software, Rodent Face Finder®, which successfully automates the most labor-intensive step in the process. Given the known mechanistic dissociations between spontaneous and evoked pain, and the primacy of the former as a clinical problem, we believe that widespread adoption of spontaneous pain measures such as the Rat Grimace Scale might lead to more successful translation of basic science findings into clinical application. PMID:21801409
A Case of Brown-Vialetto-Van Laere Syndrome Due To a Novel Mutation in SLC52A3 Gene
Thulasi, Venkatraman; Veerapandiyan, Aravindhan; Pletcher, Beth A.; Tong, Chun M.; Ming, Xue
2017-01-01
Brown-Vialetto-Van Laere syndrome is a rare disorder characterized by motor, sensory, and cranial neuronopathies, associated with mutations in the SLC52A2 and SLC52A3 genes that code for the human riboflavin transporters RFVT2 and RFVT3, respectively. The authors describe the clinical course of a 6-year-old girl with Brown-Vialetto-Van Laere syndrome and a novel homozygous mutation, c.1156T>C, in the SLC52A3 gene, who presented at the age of 2.5 years with progressive brain stem dysfunction including ptosis, facial weakness, hearing loss, dysphagia, anarthria with bilateral vocal cord paralysis, and ataxic gait. She subsequently developed respiratory failure requiring tracheostomy and worsening dysphagia necessitating a gastrostomy. Following riboflavin supplementation, resolution of facial diplegia and ataxia, and improvements in ptosis and bulbar function, including vocalization and respiration, were noted. However, her sensorineural hearing loss remained unchanged. Similar to other cases of Brown-Vialetto-Van Laere syndrome, our patient responded favorably to early riboflavin supplementation, with significant but not complete neurologic recovery. PMID:28856173
Gross, Eric; El-Baz, Ayman S.; Sokhadze, Guela E.; Sears, Lonnie; Casanova, Manuel F.; Sokhadze, Estate M.
2012-01-01
Introduction Children diagnosed with an autism spectrum disorder (ASD) often lack the ability to recognize and properly respond to emotional stimuli. Emotional deficits also characterize children with attention deficit/hyperactivity disorder (ADHD), who in addition exhibit a limited attention span. These abnormalities may produce a difference in the induced EEG gamma wave burst (35–45 Hz) that peaks approximately 300–400 milliseconds after an emotional stimulus. Because induced gamma oscillations are not fixed at a definite point in time post-stimulus, analysis of averaged EEG data with traditional methods may result in an attenuated gamma burst power. Methods We used a data alignment technique to improve the averaged data, making it a better representation of the individual induced EEG gamma oscillations. A study was designed to test the responses of subjects to emotional stimuli, presented in the form of emotional facial expression images. In a four-part experiment, the subjects were instructed to identify gender in the first two blocks of the test, followed by differentiating between basic emotions in the final two blocks (i.e., anger vs. disgust). EEG data were collected from ASD (n=10), ADHD (n=9), and control (n=11) subjects via a 128-channel EGI system, and processed through a continuous wavelet transform and bandpass filter to isolate the gamma frequencies. A custom MATLAB code was used to align the data from individual trials between 200–600 ms post-stimulus, per EEG site and condition, by maximizing the Pearson product-moment correlation coefficient between trials. The gamma power for the 400 ms window of maximum induced gamma burst was then calculated and compared between subject groups. Results and Conclusion The Condition (anger/disgust recognition, gender recognition) × Alignment × Group (ADHD, ASD, Controls) interaction was significant at most parietal topographies (e.g., P3–P4, P7–P8). These interactions were better manifested in the aligned data set. Our results show that alignment of the induced gamma oscillations improves the sensitivity of this measure in differentiating EEG responses to emotional facial stimuli in ADHD and ASD. PMID:22754277
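A minimal sketch of the alignment idea described in the Methods: each trial is shifted within a window so that its Pearson correlation with a template (here, the across-trial average) is maximized before re-averaging. This is an illustrative reimplementation under stated assumptions, not the authors' MATLAB code:

```python
import numpy as np

def align_trials(trials, max_shift):
    """Shift each single-trial gamma-envelope segment (rows of `trials`)
    within +/- max_shift samples to maximize its Pearson correlation with
    the across-trial average, then return the aligned trials. np.roll
    wraps at the edges, which is adequate for a sketch."""
    template = trials.mean(axis=0)
    aligned = []
    for trial in trials:
        best_shift, best_r = 0, -np.inf
        for shift in range(-max_shift, max_shift + 1):
            shifted = np.roll(trial, shift)
            r = np.corrcoef(shifted, template)[0, 1]
            if r > best_r:
                best_r, best_shift = r, shift
        aligned.append(np.roll(trial, best_shift))
    return np.array(aligned)

# Illustrative: 30 trials of a temporally jittered burst plus noise; the
# aligned average recovers a sharper peak than the raw average would.
rng = np.random.default_rng(2)
t = np.arange(400) / 1000.0   # 400 ms at 1 kHz
trials = np.array([
    np.exp(-((t - 0.2 - rng.uniform(-0.05, 0.05)) ** 2) / 0.001)
    + rng.normal(0, 0.2, t.size)
    for _ in range(30)
])
print(align_trials(trials, max_shift=60).mean(axis=0).max())
```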
Wang, Senfen; Liu, Yuanxiang; Wei, Jinghai; Zhang, Jian; Wang, Zhaoyang; Xu, Zigang
2017-09-01
Tuberous sclerosis complex (TSC) is a genetic disorder, and facial angiofibromas are disfiguring facial lesions. The aim of this study was to analyze the clinical and genetic features of TSC and to assess the treatment of facial angiofibromas using topical sirolimus in Chinese children. Information was collected on 29 patients with TSC. Genetic analyses were performed in 12 children and their parents. Children were treated with 0.1% sirolimus ointment for 36 weeks. Clinical efficacy and plasma sirolimus concentrations were evaluated at baseline and at 12, 24, and 36 weeks. Twenty-seven (93%) of the 29 patients had hypomelanotic macules and 15 (52%) had shagreen patches; 11 of the 12 (92%) who underwent genetic analysis had mutations in the TSC1 or TSC2 gene. Twenty-four children completed 36 weeks of treatment with topical sirolimus; facial angiofibromas were clinically undetectable in four (17%). The mean decrease in the Facial Angiofibroma Severity Index (FASI) score at 36 weeks was 47.6 ± 30.4%. There was no significant difference in the FASI score between weeks 24 and 36 (F = 1.00, p = 0.33). There was no detectable systemic absorption of sirolimus. Hypomelanotic macules are often the first sign of TSC. Genetic testing has a high detection rate in patients with a clinical diagnosis of TSC. Topical sirolimus appears to be both effective and well tolerated as a treatment for facial angiofibromas in children with TSC. The response typically plateaus after 12 to 24 weeks of treatment. © 2017 Wiley Periodicals, Inc.
Neural mechanism for judging the appropriateness of facial affect.
Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Ki, Seon Wan; Im, Dong-Mi; Lee, Soo Jung; Lee, Hong Shick
2005-12-01
Questions regarding the appropriateness of facial expressions in particular situations arise ubiquitously in everyday social interactions. To determine the appropriateness of facial affect, we must first represent our own or the other person's emotional state as induced by the social situation. Then, based on these representations, we must infer the likely affective response of the other person. In this study, we identified the brain mechanism mediating a special type of social evaluative judgment of facial affect in which the internal reference is related to theory of mind (ToM) processing. Many previous ToM studies have used non-emotional stimuli, but, because so much valuable social information is conveyed through nonverbal emotional channels, this investigation used emotionally salient visual materials to tap ToM. Fourteen right-handed healthy subjects volunteered for our study. We used functional magnetic resonance imaging to examine brain activation during a judgment task on the appropriateness of facial affect, as opposed to gender-matching tasks. We identified activation of a brain network, including the medial frontal cortex bilaterally, the left temporal pole, the left inferior frontal gyrus, and the left thalamus, during the judgment task on the appropriateness of facial affect compared to the gender-matching task. The results of this study suggest that the brain system involved in ToM plays a key role in judging the appropriateness of facial affect in an emotionally laden situation. In addition, our results support the view that common neural substrates are involved in performing diverse kinds of ToM tasks, irrespective of perceptual modality and the emotional salience of the test materials.