A Neural Basis of Facial Action Recognition in Humans
Srinivasan, Ramprakash; Golomb, Julie D.
2016-01-01
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688
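The cross-participant decoding claim above is easy to illustrate computationally. A minimal sketch, assuming synthetic voxel patterns, binary AU-presence labels, and a linear SVM (none of which are taken from the paper), shows the leave-one-subject-out logic that underlies decoding action units in participants not used to train the classifier.

```python
# Hedged sketch: cross-subject decoding of action-unit presence from voxel
# patterns. All data here are simulated; shapes and the linear-SVM choice are
# illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subj, n_voxels, n_aus = 10, 60, 200, 4

# Simulated voxel patterns and per-trial AU presence labels (0/1 per AU).
X = rng.normal(size=(n_subjects * trials_per_subj, n_voxels))
Y = rng.integers(0, 2, size=(n_subjects * trials_per_subj, n_aus))
groups = np.repeat(np.arange(n_subjects), trials_per_subj)

logo = LeaveOneGroupOut()
for au in range(n_aus):
    accs = []
    for train_idx, test_idx in logo.split(X, Y[:, au], groups):
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X[train_idx], Y[train_idx, au])
        accs.append(clf.score(X[test_idx], Y[test_idx, au]))
    print(f"AU {au}: leave-one-subject-out accuracy = {np.mean(accs):.2f}")
```

Training on all subjects but one and testing on the held-out subject is what licenses the claim that the AU code is consistent across people; with the random labels above, accuracy hovers around chance, as it should.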
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.
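The two ingredients named in this abstract, fusing feature views through a low-dimensional shared subspace and detecting AUs jointly, can be conveyed with a much simpler stand-in. The sketch below assumes synthetic geometric and appearance features and substitutes CCA plus one-vs-rest logistic regression for the paper's Bayesian multi-conditional latent-variable model.

```python
# Illustrative proxy for shared-subspace feature fusion followed by joint
# (multi-label) AU detection; not the model described in the abstract.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
n, d_geom, d_app, n_aus, k = 500, 40, 120, 6, 8

X_geom = rng.normal(size=(n, d_geom))    # geometric features (e.g., landmarks)
X_app = rng.normal(size=(n, d_app))      # appearance features (e.g., LBP/Gabor)
Y = rng.integers(0, 2, size=(n, n_aus))  # AU presence labels

# Shared subspace: CCA finds correlated low-dimensional projections of both views.
cca = CCA(n_components=k).fit(X_geom, X_app)
Z_geom, Z_app = cca.transform(X_geom, X_app)
Z = np.hstack([Z_geom, Z_app])

# Joint detection: one logistic regression per AU on the fused representation.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Z, Y)
print("training subset accuracy:", clf.score(Z, Y))
```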
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210
Joint Patch and Multi-label Learning for Facial Action Unit Detection
Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang
2016-01-01
The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243
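The patch-selection idea can be approximated with off-the-shelf tools: group features by facial patch, fit a jointly sparse multi-label model, and keep only the patches whose coefficient blocks survive. MultiTaskLasso and the synthetic data below are stand-ins for, not reproductions of, the JPML objective.

```python
# Rough sketch of group-sparse patch selection for multi-label AU modeling.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(2)
n, n_patches, feats_per_patch, n_aus = 400, 20, 10, 5
X = rng.normal(size=(n, n_patches * feats_per_patch))
Y = rng.normal(size=(n, n_aus))          # continuous AU targets for the sketch

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
W = model.coef_                          # shape: (n_aus, n_features)

# Norm of each patch's coefficient block, pooled over all AUs.
patch_norms = np.array([
    np.linalg.norm(W[:, p * feats_per_patch:(p + 1) * feats_per_patch])
    for p in range(n_patches)
])
selected = np.flatnonzero(patch_norms > 1e-6)
print("patches retained:", selected)
```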
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
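The reported model is a standard logistic regression with the three face displays and the National Early Warning Score as covariates; a hedged reconstruction on synthetic data, with the concordance statistic estimated as the ROC AUC, might look like the following.

```python
# Synthetic re-creation of the regression structure described above; the data
# are random placeholders, not the study's patients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 34
fd1 = rng.integers(0, 2, n)       # face display 1: AUs 43 + 15 + 25
fd2 = rng.integers(0, 2, n)       # face display 2: AUs 43 + 15 + 51/52
fd3 = rng.integers(0, 2, n)       # face display 3: AUs 43 + 15 + 51 + 25
news = rng.integers(0, 12, n)     # National Early Warning Score
icu = rng.integers(0, 2, n)       # admission to intensive care (outcome)

X = np.column_stack([fd1, fd2, fd3, news])
model = LogisticRegression(max_iter=1000).fit(X, icu)
c_index = roc_auc_score(icu, model.predict_proba(X)[:, 1])
print(f"concordance statistic (in-sample): {c_index:.2f}")
```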
A study of patient facial expressivity in relation to orthodontic/surgical treatment.
Nafziger, Y J
1994-09-01
A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires (besides the routine static study, such as records, study models, photographs, and cephalometric tracings) the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face is studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. With video recordings of faces and photographic images taken from the video recordings, the authors have modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique, itself, is based on the defining of individual facial expressions and then codifying such expressions through the use of minimal, anatomic action units. These action units actually combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and six control subjects without dentofacial deformation have been studied. I was able to register 6,278 facial expressions and then further define 18,844 action units from these expressions. A classification of the facial expressions made by subject groups and repeated in quantified time frames has allowed establishment of "rules" or "norms" relating to expression, thus further enabling the making of comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to the facial expressions of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. Changes noted tended toward a functioning that is identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and the chin. The results of our study are supported by the clinical observations and suggest that the FACS technique should be able to provide a coding for the study of facial expression.
Automated detection of pain from facial expressions: a rule-based approach using AAM
NASA Astrophysics Data System (ADS)
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
2012-02-01
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
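A toy version of the rule-based stage can make the idea concrete: geometric cues are computed from AAM shape vertices and thresholded to flag pain-related AUs. The landmark indices (a hypothetical 68-point layout) and thresholds below are illustrative assumptions, not the paper's actual rules.

```python
# Toy rule-based detection of pain-related AUs from shape vertices.
import numpy as np

def detect_pain_aus(landmarks: np.ndarray, neutral: np.ndarray) -> dict:
    """landmarks, neutral: (68, 2) arrays of (x, y) shape vertices."""
    def brow_eye_gap(pts):
        # vertical distance between inner brow and upper eyelid, both eyes
        return np.mean([pts[37, 1] - pts[19, 1], pts[44, 1] - pts[24, 1]])

    def eye_aperture(pts):
        return np.mean([pts[41, 1] - pts[37, 1], pts[46, 1] - pts[44, 1]])

    gap_ratio = brow_eye_gap(landmarks) / brow_eye_gap(neutral)
    aperture_ratio = eye_aperture(landmarks) / eye_aperture(neutral)
    return {
        "AU4_brow_lowerer": bool(gap_ratio < 0.85),            # brows pulled toward eyes
        "AU6_7_orbit_tightening": bool(aperture_ratio < 0.7),  # narrowed eye aperture
    }

neutral = np.zeros((68, 2))
neutral[[19, 24], 1] = 30.0      # inner brows (y grows downward)
neutral[[37, 44], 1] = 40.0      # upper eyelids
neutral[[41, 46], 1] = 48.0      # lower eyelids
frame = neutral.copy()
frame[[19, 24], 1] += 5.0        # simulate AU4-like brow lowering
print(detect_pain_aus(frame, neutral))
```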
Differences between Children and Adults in the Recognition of Enjoyment Smiles
ERIC Educational Resources Information Center
Del Giudice, Marco; Colle, Livia
2007-01-01
The authors investigated the differences between 8-year-olds (n = 80) and adults (n = 80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple…
A dynamic appearance descriptor approach to facial actions temporal modeling.
Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja
2014-02-01
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
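The second stage described above, enforcing temporal consistency over frame-wise predictions, can be sketched with a small Viterbi decoder over the four temporal segments (neutral, onset, apex, offset). The frame scores and transition matrix below are invented; LPQ-TOP feature extraction and the actual per-frame classifier are omitted.

```python
# Viterbi smoothing of per-frame AU temporal-segment scores (toy example).
import numpy as np

states = ["neutral", "onset", "apex", "offset"]

# Transitions encouraging the natural neutral -> onset -> apex -> offset -> neutral order.
A = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.80, 0.20, 0.00],
              [0.00, 0.00, 0.85, 0.15],
              [0.20, 0.00, 0.00, 0.80]])

def viterbi(frame_probs, A, pi):
    T, S = frame_probs.shape
    delta = np.log(pi) + np.log(frame_probs[0])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A + 1e-12)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(frame_probs[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

rng = np.random.default_rng(5)
frame_probs = rng.dirichlet(np.ones(4), size=12)   # stand-in classifier outputs
pi = np.array([0.7, 0.1, 0.1, 0.1])
print(viterbi(frame_probs, A, pi))
```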
Emotion categories and dimensions in the facial communication of affect: An integrated approach.
Mehu, Marc; Scherer, Klaus R
2015-12-01
We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements.
Coding and quantification of a facial expression for pain in lambs.
Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J
2016-11-01
Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain.
The results of these experiments should be interpreted with caution due to low lamb numbers.
Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects
Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan
2014-01-01
Generating extreme appearances such as sweating when scared, crying tears when overcome with happiness, and blushing (in anger or happiness) is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics. PMID:25136663
A model for production, perception, and acquisition of actions in face-to-face communication.
Kröger, Bernd J; Kopp, Stefan; Lowit, Anja
2010-08-01
The concept of action as basic motor control unit for goal-directed movement behavior has been used primarily for private or non-communicative actions like walking, reaching, or grasping. In this paper, literature is reviewed indicating that this concept can also be used in all domains of face-to-face communication like speech, co-verbal facial expression, and co-verbal gesturing. Three domain-specific types of actions, i.e. speech actions, facial actions, and hand-arm actions, are defined in this paper and a model is proposed that elucidates the underlying biological mechanisms of action production, action perception, and action acquisition in all domains of face-to-face communication. This model can be used as theoretical framework for empirical analysis or simulation with embodied conversational agents, and thus for advanced human-computer interaction technologies.
Mapping the emotional face. How individual face parts contribute to successful emotion recognition
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing the importance of different face areas for each expression to be visualized. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
Selective Transfer Machine for Personalized Facial Action Unit Detection
Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffery F.
2014-01-01
Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all. PMID:25242877
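The personalization intuition, re-weighting training samples toward the unlabeled test subject's feature distribution, can be approximated crudely as follows. STM itself solves a joint objective that couples distribution matching with the classification loss; the RBF similarity weights and weighted SVM below are only a stand-in for that idea.

```python
# Crude stand-in for personalizing an AU classifier by instance re-weighting.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_train, n_test, d = 300, 80, 20
X_train = rng.normal(size=(n_train, d))
y_train = rng.integers(0, 2, n_train)            # AU present / absent
X_test = rng.normal(loc=0.5, size=(n_test, d))   # unseen person, shifted statistics

# Weight each training sample by an RBF similarity to the test person's mean.
test_mean = X_test.mean(axis=0)
dists = np.linalg.norm(X_train - test_mean, axis=1)
weights = np.exp(-dists**2 / (2 * np.median(dists)**2))

generic = SVC(kernel="linear").fit(X_train, y_train)
personalized = SVC(kernel="linear").fit(X_train, y_train, sample_weight=weights)
print("generic predictions:     ", np.bincount(generic.predict(X_test), minlength=2))
print("personalized predictions:", np.bincount(personalized.predict(X_test), minlength=2))
```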
Action Unit Models of Facial Expression of Emotion in the Presence of Speech
Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
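Decision-level fusion of the modality-specific models reduces to combining their posterior probabilities. In the minimal illustration below, the three probability vectors are random placeholders standing in for trained talking-face, silent-face, and audio classifiers, and the fusion weights are assumed rather than taken from the paper.

```python
# Weighted decision-level fusion of emotion posteriors from three models.
import numpy as np

emotions = ["anger", "disgust", "fear", "happy", "sad"]
rng = np.random.default_rng(7)

p_talking = rng.dirichlet(np.ones(5))   # P(emotion | talking-face AU features)
p_silent = rng.dirichlet(np.ones(5))    # P(emotion | silent-face AU features)
p_audio = rng.dirichlet(np.ones(5))     # P(emotion | audio features)

w = np.array([0.3, 0.3, 0.4])           # fusion weights, tunable on validation data
p_fused = w[0] * p_talking + w[1] * p_silent + w[2] * p_audio
print("fused prediction:", emotions[int(np.argmax(p_fused))])
```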
Automated and objective action coding of facial expressions in patients with acute facial palsy.
Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando
2015-05-01
The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 % and there was no significant difference from patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
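The abstract does not give the exact formulas behind the asymmetry score and the affected/healthy ratio, so the snippet below assumes simple definitions (counts and set overlap of activated AUs per hemiface) purely to make the side-to-side comparison concrete.

```python
# Assumed (not the study's) definitions of the side-to-side AU measures.
def facial_asymmetry(aus_affected: set, aus_healthy: set) -> dict:
    ratio = len(aus_affected) / max(len(aus_healthy), 1)        # affected / healthy
    union = aus_affected | aus_healthy
    asymmetry_pct = 100 * len(aus_affected ^ aus_healthy) / max(len(union), 1)
    return {"activated_ratio": ratio, "asymmetry_%": asymmetry_pct}

healthy_side = {1, 2, 6, 12, 25}     # activated AU numbers (illustrative)
paretic_side = {1, 25}
print(facial_asymmetry(paretic_side, healthy_side))
```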
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.
2017-06-01
The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated using a Raspberry Pi. Results are stored in a database for easy and quick access. The databases used are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.
A unified probabilistic framework for spontaneous facial action modeling and understanding.
Tong, Yan; Chen, Jixu; Ji, Qiang
2010-02-01
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
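The core claim, that a couple of movement primitives suffice, is in the spirit of low-rank decomposition of facial motion data. The sketch below runs plain PCA on synthetic trajectories built from two latent primitives; the paper's dedicated generative model and Bayesian model comparison are not reproduced.

```python
# Estimating the effective dimensionality of simulated dynamic expressions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_expressions, n_frames, n_controls = 11, 50, 30

# Synthetic data generated from 2 latent temporal primitives plus noise,
# flattened to (expressions, frames * control dimensions).
t = np.linspace(0, 1, n_frames)
primitives = np.stack([np.sin(np.pi * t), t**2])               # (2, frames)
mixing = rng.normal(size=(n_expressions, 2))
spatial = rng.normal(size=(2, n_controls))
data = np.einsum("ek,kt,kc->etc", mixing, primitives, spatial)
data = (data + 0.01 * rng.normal(size=data.shape)).reshape(n_expressions, -1)

pca = PCA().fit(data)
explained = np.cumsum(pca.explained_variance_ratio_)
print("components needed for 95% variance:", int(np.searchsorted(explained, 0.95) + 1))
```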
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the under-layer classification at the sub-phonemic level has been modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classifications, and the synthesized facial sequences reach a comparatively convincing quality.
Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study
Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie
2014-01-01
Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participant's self-reports. Disgust-induced emotions were significantly higher than the other emotions. Thus we focused on the analysis of the recorded data during watching disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Also differences between PD patients with different progression of Parkinson's disease have been observed. PMID:25478003
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi
2017-01-01
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979
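The stepwise-regression step can be illustrated with forward feature selection over AU-occurrence indicators for one self-reported emotion. The simulated data, the chosen AU set, and the use of scikit-learn's SequentialFeatureSelector are assumptions; the study's exact stepwise procedure is not reproduced.

```python
# Forward selection of the AUs that best predict a simulated amusement rating.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
au_names = ["AU1", "AU2", "AU4", "AU6", "AU9", "AU12", "AU15", "AU25"]
n = 120
X = rng.integers(0, 2, size=(n, len(au_names))).astype(float)  # AU present / absent
# Simulated ratings driven by AU6 + AU12 (a Duchenne-style smile).
y = 2.0 * X[:, 3] + 2.5 * X[:, 5] + rng.normal(scale=0.5, size=n)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2, direction="forward", cv=5)
selector.fit(X, y)
print("selected AUs:", [au_names[i] for i in np.flatnonzero(selector.get_support())])
```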
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
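The Gabor-wavelet representation that performed best here can be sketched as a small filter bank whose pooled magnitude responses form the feature vector handed to a classifier (omitted). The frequencies, orientations, and pooling below are illustrative choices, not the paper's settings.

```python
# Pooled Gabor-magnitude features for a face patch (illustrative parameters).
import numpy as np
from scipy.signal import convolve2d
from skimage.filters import gabor_kernel

def gabor_features(image: np.ndarray) -> np.ndarray:
    feats = []
    for frequency in (0.15, 0.25, 0.35):
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            k = gabor_kernel(frequency, theta=theta)
            real = convolve2d(image, np.real(k), mode="same", boundary="symm")
            imag = convolve2d(image, np.imag(k), mode="same", boundary="symm")
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.array(feats)

face_patch = np.random.default_rng(10).uniform(size=(64, 64))  # stand-in face crop
print("feature vector length:", gabor_features(face_patch).shape[0])
```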
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we used the Korean face database for model training. For testing, we used the CUbiC FacePix database, the facial expressions and emotion database, the Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.
2016-01-01
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expression of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance was constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. In FER, a novel action unit (AU) space-based FER method has been proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.
Facial color is an efficient mechanism to visually transmit emotion.
Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M
2018-04-03
Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780
A large-scale analysis of sex differences in facial expressions
Kodra, Evan; el Kaliouby, Rana; LaFrance, Marianne
2017-01-01
There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. This large-scale study examines whether women are consistently more expressive than men or whether the effects depend on the emotion expressed. Prior studies of gender differences in expressivity have largely been restricted to data collected in lab settings or requiring labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants' own webcams, and facial activity was coded with a new automated facial coding technology. We find that women are not universally more expressive across all facial actions, nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. In general, women express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and depends on the discrete emotional state. PMID:28422963
Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang
2016-10-01
The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm in which they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were differentiated reliably and significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semisupervised learning and transfer learning methods. PMID:27479964
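The following is a rough sketch of the easy-to-hard intuition behind CPM, not the authors' formulation: a generic classifier supplies "virtual labels" only for confident test samples, and a person-specific classifier is then fit on those. The confidence threshold is an assumed hyperparameter, and the spatio-temporal smoothness term and the iCPM/kCPM extensions are omitted.

```python
# Rough sketch of the easy-to-hard idea: confident predictions on the test
# subject become virtual labels for training a person-specific classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy generic training data: features and binary AU labels from many subjects.
X_train = rng.normal(size=(500, 20))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Toy test subject with a slight feature shift (inter-personal variability).
X_test = rng.normal(loc=0.3, size=(200, 20))
y_test = (X_test[:, 0] > 0.3).astype(int)

generic = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = generic.predict_proba(X_test)[:, 1]
confident = (proba > 0.8) | (proba < 0.2)          # "easy" test samples
virtual_labels = (proba[confident] > 0.5).astype(int)

if confident.sum() >= 10 and len(set(virtual_labels)) == 2:
    person_specific = LogisticRegression(max_iter=1000).fit(
        X_test[confident], virtual_labels)
    final_pred = person_specific.predict(X_test)
else:                                               # fall back to generic model
    final_pred = generic.predict(X_test)

print("generic acc:", (generic.predict(X_test) == y_test).mean())
print("personalized acc:", (final_pred == y_test).mean())
```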
The localization of facial motor impairment in sporadic Möbius syndrome.
Cattaneo, L; Chierici, E; Bianchi, B; Sesenna, E; Pavesi, G
2006-06-27
To investigate the neurophysiologic aspects of facial motor control in patients with sporadic Möbius syndrome defined as nonprogressive congenital facial and abducens palsy. The authors assessed 24 patients with sporadic Möbius syndrome by performing a complete clinical examination and neurophysiologic tests including facial nerve conduction studies, needle electromyography examination of facial muscles, and recording of the blink reflex and of the trigeminofacial inhibitory reflex. Two distinct groups of patients were identified according to neurophysiologic testing. The first group was characterized by increased facial distal motor latencies (DMLs) and poor recruitment of small and polyphasic motor unit action potentials (MUAPs). The second group was characterized by normal facial DMLs and neuropathic MUAPs. It is hypothesized that in the first group, the disorder is due to a rhombencephalic maldevelopment with selective sparing of small-size MUs, and in the second group, the disorder is related to an acquired nervous injury during intrauterine life, with subsequent neurogenic remodeling of MUs. The trigeminofacial reflexes showed that in most subjects of both groups, the functional impairment of facial movements was caused by a nuclear or peripheral site of lesion, with little evidence of brainstem interneuronal involvement. Two different neurophysiologically defined phenotypes can be distinguished in sporadic Möbius syndrome, with different pathogenetic implications.
The faces of pain: a cluster analysis of individual differences in facial activity patterns of pain.
Kunz, M; Lautenbacher, S
2014-07-01
There is general agreement that facial activity during pain conveys pain-specific information but is nevertheless characterized by substantial inter-individual differences. With the present study we aim to investigate whether these differences represent idiosyncratic variations or whether they can be clustered into distinct facial activity patterns. Facial actions during heat pain were assessed in two samples of pain-free individuals (n = 128; n = 112) and were later analysed using the Facial Action Coding System. Hierarchical cluster analyses were used to look for combinations of single facial actions in episodes of pain. The stability/replicability of facial activity patterns was determined across samples as well as across different basic social situations. Cluster analyses revealed four distinct activity patterns during pain, which stably occurred across samples and situations: (I) narrowed eyes with furrowed brows and wrinkled nose; (II) opened mouth with narrowed eyes; (III) raised eyebrows; and (IV) furrowed brows with narrowed eyes. In addition, a considerable number of participants were facially completely unresponsive during pain induction (stoic cluster). These activity patterns seem to be reaction stereotypies in the majority of individuals (in nearly two-thirds), whereas a minority displayed varying clusters across situations. These findings suggest that there is no uniform set of facial actions but instead there are at least four different facial activity patterns occurring during pain that are composed of different configurations of facial actions. Raising awareness about these different 'faces of pain' might hold the potential of improving the detection and, thereby, the communication of pain. © 2013 European Pain Federation - EFIC®
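As an illustration of the clustering step, the sketch below groups binary AU-occurrence vectors with hierarchical (Ward) clustering and prints each cluster's AU profile; the AU columns, the cluster count, and the data are assumptions, not the study's coding.

```python
# Illustrative sketch: hierarchical clustering of per-episode AU occurrence
# vectors, mirroring the idea of distinct facial activity patterns during pain.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
aus = ["AU4_brow_lower", "AU6_7_eye_narrow", "AU9_nose_wrinkle",
       "AU25_27_mouth_open", "AU1_2_brow_raise"]

# Toy data: 120 participants x 5 AUs, 1 = AU shown during the pain episode.
X = rng.integers(0, 2, size=(120, len(aus))).astype(float)

Z = linkage(X, method="ward")                # hierarchical cluster analysis
clusters = fcluster(Z, t=4, criterion="maxclust")

for c in np.unique(clusters):
    profile = X[clusters == c].mean(axis=0)  # AU frequency within the cluster
    print(f"cluster {c} (n={np.sum(clusters == c)}):",
          dict(zip(aus, profile.round(2))))
```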
[Expression of emotions in children's drawings of a man from 5 to 11 years of age].
Brechet, Claire; Picard, Delphine; Baldy, René
2007-06-01
This study examines the development of children's ability to express emotions in their human figure drawings. Sixty children aged 5, 8, and 11 years were asked to draw "a man," and then a "sad," "happy," "angry," and "surprised" man. Expressivity of the drawings was assessed by means of two procedures: a limited-choice and a free-labelling procedure. Emotionally expressive drawings were then evaluated in terms of the number and type of graphic cues used to express emotion. Children were able to depict happiness and sadness at age 8, and anger and surprise at age 11. With age, children use increasingly numerous and complex graphic cues for each emotion (i.e., facial expression, body position, and contextual cues). Graphic cues for facial expression (e.g., concave mouth, curved eyebrows, wide-open eyes) share strong similarities with specific "action units" described by Ekman and Friesen (1978) in their Facial Action Coding System. Children's ability to depict emotion in their human figure drawings is discussed in relation to perceptual, conceptual, and graphic abilities.
A comparison of facial expression properties in five hylobatid species.
Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget
2014-07-01
Little is known about facial communication in lesser apes (family Hylobatidae) and how their facial expressions, and the use of those expressions, relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five hylobatid species belonging to three genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared repertoire size, defined as the number of different types of facial expression used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use of each type of facial expression. We observed a higher rate and diversity of facial expression, but not a larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used Phylogenetic Generalized Least Squares (PGLS) regression to correlate these properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. © 2014 Wiley Periodicals, Inc.
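A small sketch of the three expression properties compared across genera. Interpreting "repertoire weighted by the rate of use of each type" as a Shannon-style effective number of types is an assumption; the authors' exact diversity formula may differ.

```python
# Sketch of rate, repertoire size, and diversity for one observed pair.
import numpy as np
from collections import Counter

def expression_properties(events, observation_hours):
    """events: list of facial-expression type labels observed in one pair."""
    counts = Counter(events)
    rate = len(events) / observation_hours            # expressions per hour
    repertoire = len(counts)                          # number of distinct types
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    diversity = float(np.exp(-(p * np.log(p)).sum())) # effective no. of types
    return rate, repertoire, diversity

# Toy observation: expression types recorded for one siamang pair over 10 h.
events = ["open_mouth", "bared_teeth", "open_mouth", "pout",
          "open_mouth", "bared_teeth", "raised_brow"]
print(expression_properties(events, observation_hours=10.0))
```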
Enjoying vs. smiling: Facial muscular activation in response to emotional language.
Fino, Edita; Menegatti, Michela; Avenanti, Alessio; Rubini, Monica
2016-07-01
The present study examined whether emotionally congruent facial muscular activation, a somatic index of emotional language embodiment, can be elicited by reading subject-verb sentences composed of action verbs that refer directly to facial expressions (e.g., Mario smiles), but also by reading more abstract state verbs, which provide more direct access to the emotions felt by the agent (e.g., Mario enjoys). To address this issue, we measured facial electromyography (EMG) while participants evaluated state- and action-verb sentences. We found that emotional sentences containing either verb category had valence-congruent effects on emotional ratings and corresponding facial muscle activations. As expected, state-verb sentences were judged with higher valence ratings than action-verb sentences. Moreover, although emotion-congruent facial activations were similar for the two linguistic categories, in a late temporal window we found a tendency for greater EMG modulation when reading action relative to state verb sentences. These results support embodied theories of language comprehension and suggest that understanding emotional action- and state-verb sentences relies on partially dissociable motor and emotional processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki
2017-10-01
Speech is unique to humans and is characterized by facial actions, with lip, mouth, or jaw movements oscillating at ∼5 Hz. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns matching those of speech, suggesting that lip-smacking is a precursor of speech. We tested whether facial or vocal actions exhibiting the same oscillation rate are found in a wider range of facial and vocal displays in various social contexts, and whether they show diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides) and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. Facial display (TC) and vocalization (panting) oscillated at 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicate a common mechanism for the central pattern generator underlying orofacial movements, which would later evolve into speech. Similar oscillations in panting, which evolved from different muscular control than the orofacial actions, suggest sensory foundations for a perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states that a central pattern generator for 5-Hz facial oscillation and a perceptual background tuned to 5-Hz actions existed in the common ancestors of macaques and humans, before the emergence of speech. © 2017 Wiley Periodicals, Inc.
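A small sketch of how cycle durations translate into the reported oscillation frequencies: each open-to-open (or peak-to-peak) duration gives one cycle whose frequency is its reciprocal. The duration values below are invented for illustration, not the measured data.

```python
# Convert cycle durations (seconds) into mean and SD of oscillation frequency.
import numpy as np

def oscillation_hz(durations_s):
    """Mean and SD of instantaneous frequency from cycle durations (seconds)."""
    freqs = 1.0 / np.asarray(durations_s, dtype=float)
    return freqs.mean(), freqs.std(ddof=1)

teeth_chatter_cycles = [0.18, 0.16, 0.20, 0.17, 0.15,
                        0.19, 0.18, 0.16, 0.21, 0.17]   # toy values, seconds
mean_hz, sd_hz = oscillation_hz(teeth_chatter_cycles)
print(f"teeth chattering: {mean_hz:.2f} +/- {sd_hz:.2f} Hz")
```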
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-standing debate concerns whether the sensorimotor coding carried out during the observation of transitive actions reflects low-level movement implementation details or movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that fall outside this debate. The investigation of phoneme discrimination has proven to be a good model for demonstrating that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower- or upper-face posture manipulation while they performed a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model for testing the role of the sensorimotor system in the perception of actions presented visually.
Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.
Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves
2018-06-01
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.
Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions
ERIC Educational Resources Information Center
Sato, Wataru; Yoshikawa, Sakiko
2007-01-01
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica
2018-01-01
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differ both in terms of the amount and intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.
Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda
2017-01-01
People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.
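An illustrative sketch of the coactivation analysis, assuming frame-by-frame FACS intensity codes: AU-by-AU correlations give a coactivation matrix per participant, summarized here by the mean off-diagonal correlation. AU names and data are placeholders, not the study's recordings.

```python
# Toy coactivation matrix from frame-by-frame AU intensities for one participant.
import numpy as np

rng = np.random.default_rng(4)
aus = ["AU1", "AU2", "AU6", "AU12", "AU25"]

# Toy FACS intensity codes (0-5) for 600 video frames.
intensities = rng.integers(0, 6, size=(600, len(aus))).astype(float)

coactivation = np.corrcoef(intensities, rowvar=False)   # AU x AU correlations
print(np.round(coactivation, 2))

# Simple summary used to compare participants: mean off-diagonal correlation.
off_diag = coactivation[~np.eye(len(aus), dtype=bool)]
print("mean AU coactivation:", off_diag.mean().round(3))
```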
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1976-01-01
The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)
Sasaki, Ryo; Matsumine, Hajime; Watanabe, Yorikatsu; Takeuchi, Yuichi; Yamato, Masayuki; Okano, Teruo; Miyata, Mariko; Ando, Tomohiro
2014-11-01
Dental pulp tissue contains Schwann and neural progenitor cells. Tissue-engineered nerve conduits with dental pulp cells promote facial nerve regeneration in rats. However, no nerve functional or electrophysiologic evaluations were performed. This study investigated the compound muscle action potential recordings and facial functional analysis of dental pulp cell regenerated nerve in rats. A silicone tube containing rat dental pulp cells in type I collagen gel was transplanted into a 7-mm gap of the buccal branch of the facial nerve in Lewis rats; the same defect was created in the marginal mandibular branch, which was ligatured. Compound muscle action potential recordings of vibrissal muscles and facial functional analysis with facial palsy score of the nerve were performed. Tubulation with dental pulp cells showed significantly lower facial palsy scores than the autograft group between 3 and 10 weeks postoperatively. However, the dental pulp cell facial palsy scores showed no significant difference from those of autograft after 11 weeks. Amplitude and duration of compound muscle action potentials in the dental pulp cell group showed no significant difference from those of the intact and autograft groups, and there was no significant difference in the latency of compound muscle action potentials between the groups at 13 weeks postoperatively. However, the latency in the dental pulp cell group was prolonged more than that of the intact group. Tubulation with dental pulp cells could recover facial nerve defects functionally and electrophysiologically, and the recovery became comparable to that of nerve autografting in rats.
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complex function by using information acquired from simpler, initially immature functions. A self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot, is proposed. A robot with this system can understand human facial expressions and behave according to them after the learning process is completed. The system is modelled on the process by which a baby learns its parents' facial expressions. A camera mounted on the robot provides face images, and CdS sensors on the robot's head provide information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs the actions associated with that expression.
Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S.; Winkielman, Piotr
2018-01-01
Facial actions are key elements of non-verbal behavior. Perceivers’ reactions to others’ facial expressions often represent a match or mirroring (e.g., they smile to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants’ reactions over zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants’ facial responses to android’s expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants’ responses to the game outcome were similar regardless of whether it was conveyed via the android’s smile or frown. Furthermore, the outcome had greater impact on people’s facial reactions when it was conveyed through android’s face than a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions. PMID:29740307
Altering sensorimotor feedback disrupts visual discrimination of facial expressions.
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
2016-08-01
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
Action recognition is sensitive to the identity of the actor.
Ferstl, Ylva; Bülthoff, Heinrich; de la Rosa, Stephan
2017-09-01
Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been the subject of discussion in embodied approaches to action recognition. Here we examine one solution: that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action-related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. The finding demonstrates that neural processes underlying action recognition are sensitive to identity information (including facial identity) and are thereby not exclusively tuned to actions. We suggest that such response properties are useful for helping humans know who carried out an action. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.
2016-03-01
Autism Spectrum Disorders (ASD) can impair non-verbal communication, including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in facial expressions shown by facial muscle-specific changes (also known as 'facial oddity' in subjects with ASD) may be measured visually. However, this mode of measurement may not discern the subtlety of facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on a group of eight participants with ASD and eight typically developing participants in a control group, capturing their facial images in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean level of facial muscle action in the ASD group than in the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding suggests that facial oddity may provide objective, measurable differential markers in the facial expressions of individuals with ASD.
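One hypothetical way to turn the idea of "facial oddity" into numbers is sketched below: per-region action magnitudes (for example, landmark-displacement energy on each side of the face) are reduced to an overall intensity and a left-right asymmetry index. The formula and data are assumptions, not the authors' measures.

```python
# Hypothetical facial-oddity summary: overall action intensity plus asymmetry.
import numpy as np

def oddity_measures(left_action, right_action):
    """left/right_action: per-frame action magnitudes per facial region."""
    intensity = 0.5 * (left_action.mean() + right_action.mean())
    asymmetry = np.abs(left_action.mean(axis=0) - right_action.mean(axis=0))
    return intensity, asymmetry.mean()

rng = np.random.default_rng(5)
# Toy magnitudes: 300 frames x 4 facial regions, with an exaggerated left side.
left = rng.gamma(shape=2.0, scale=1.5, size=(300, 4))
right = rng.gamma(shape=2.0, scale=1.0, size=(300, 4))
print(oddity_measures(left, right))
```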
Intact imitation of emotional facial actions in autism spectrum conditions.
Press, Clare; Richardson, Daniel; Bird, Geoffrey
2010-09-01
It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls, when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development. Copyright 2010 Elsevier Ltd. All rights reserved.
The identification of unfolding facial expressions.
Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo
2012-01-01
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating individual diagnostic facial actions over time, and does not require perceiving the full apex configuration.
NASA Astrophysics Data System (ADS)
Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.
2016-08-01
This article aims to implement some practical applications using the Socibot Desktop social robot. We realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory and projected onto its face. The first application is created in the Compose submenu, which contains five file categories (audio, eyes, face, head, mood) that help in the creation of the projected sequence. The second application is more complex; the completed program contains audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application aims to change the robot's appearance with a guise created by us. The guise was created in Adobe Photoshop and then loaded into the robot's memory.
Bodily action penetrates affective perception
Rigutti, Sara; Gerbino, Walter
2016-01-01
Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches; while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these constitute compelling evidence of a genuine top-down effect on perception: specifically, facial expressions of emotion are penetrable by action-induced mood. Affective priming by action valence is a candidate mechanism for the influence of observer’s internal states on properties experienced as phenomenally objective and yet loaded with meaning. PMID:26893964
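The sensitivity-versus-bias distinction at the core of this argument is usually formalized with signal detection theory; the sketch below computes d' (sensitivity) and the criterion (response bias) from hit and false-alarm counts, which are invented here for illustration.

```python
# Standard equal-variance SDT estimates with a small log-linear correction.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, fas, crs):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Toy counts for detecting subtle happiness after comfortable reaches...
print(dprime_and_criterion(hits=42, misses=8, fas=12, crs=38))
# ...and after uncomfortable reaches (lower sensitivity, similar criterion).
print(dprime_and_criterion(hits=33, misses=17, fas=13, crs=37))
```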
The Influence of Facial Signals on the Automatic Imitation of Hand Actions
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2016-01-01
Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, there is little known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate “in the moment” states than enduring traits. These data support the view that a smile primes multiple forms of imitation including the copying actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation. PMID:27833573
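A toy sketch of the congruency-effect computation underlying the automatic imitation measure: responses are faster when observed and executed actions match, and the effect can be compared across facial-cue conditions. RTs are simulated, and the paper's Bayesian analysis of the null effect is not reproduced here.

```python
# Simulated automatic-imitation congruency effects under two facial-cue conditions.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n = 40                                            # participants

def simulate_rts(congruency_benefit_ms):
    congruent = rng.normal(480, 40, size=n)
    incongruent = congruent + rng.normal(congruency_benefit_ms, 15, size=n)
    return congruent, incongruent

for cue, benefit in [("happy face", 35.0), ("neutral face", 20.0)]:
    con, incon = simulate_rts(benefit)
    effect = incon - con                          # ms, per participant
    t, p = ttest_rel(incon, con)
    print(f"{cue}: mean congruency effect {effect.mean():.1f} ms, "
          f"t={t:.2f}, p={p:.3g}")
```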
Raghupathi, Krishna R.; Azagarsamy, Malar A.; Thayumanavan, S.
2012-01-01
Stimuli sensitive, facially amphiphilic dendrimers have been synthesized and their enzyme-responsive nature has been determined with dual fluorescence responses of both covalently conjugated and non-covalently bound reporter units. These dual responses are correlated to ascertain the effect of enzymatic action on micellar aggregates and the consequential guest release. The release of the guest molecule is conveniently tuned by stabilizing the micellar aggregates through photochemical crosslinking of hydrophobic coumarin units. This photo-crosslinking is also utilized as a tool to investigate the mode of enzyme-substrate interaction in the context of aggregate-monomer equilibrium. PMID:21887830
Universals and cultural variations in 22 emotional expressions across five cultures.
Cordaro, Daniel T; Sun, Rui; Keltner, Dacher; Kamble, Shanmukh; Huddar, Niranjan; McNeil, Galen
2018-02-01
We collected and Facial Action Coding System (FACS) coded over 2,600 free-response facial and body displays of 22 emotions in China, India, Japan, Korea, and the United States to test 5 hypotheses concerning universals and cultural variants in emotional expression. New techniques enabled us to identify cross-cultural core patterns of expressive behaviors for each of the 22 emotions. We also documented systematic cultural variations of expressive behaviors within each culture that were shaped by the cultural resemblance in values, and identified a gradient of universality for the 22 emotions. Our discussion focused on the science of new expressions and how the evidence from this investigation identifies the extent to which emotional displays vary across cultures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.
Fitousi, Daniel
2017-07-01
A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures-dubbed herein "face files"-consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs. That is repeating a combination of facial features or altering them altogether, led to faster responses than repeating or alternating only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman, & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.
[Contribution of botulinum toxin to maxillo-facial surgery].
Batifol, D; de Boutray, M; Goudot, P; Lorenzo, S
2013-04-01
Botulinum toxin has a wide range of uses in maxillo-facial surgery due to its action on muscles, on the glandular system, and against pain. It has already received several marketing authorizations, with indications for blepharospasm, spasmodic torticollis, and glabellar lines. Furthermore, several studies are ongoing to prove its effectiveness and usefulness for many other pathologies: treatment of pain following cervical spine surgery; action on salivary glands after trauma, hypertrophy, or hyper-salivation; and analgesic action (acknowledged but still under investigation) on neuralgia, articular pain, and keloid scars due to its anti-inflammatory properties. Botulinum toxin injections in the cervico-facial area are used more and more and should be correctly assessed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Hyvärinen, Antti; Tarkka, Ina M; Mervaala, Esa; Pääkkönen, Ari; Valtonen, Hannu; Nuutinen, Juhani
2008-12-01
The purpose of this study was to assess clinical and neurophysiological changes after 6 mos of transcutaneous electrical stimulation in patients with unresolved facial nerve paralysis. A pilot case series of 10 consecutive patients with chronic facial nerve paralysis, either of idiopathic origin or due to herpes zoster oticus, participated in this open study. All patients received below-sensory-threshold transcutaneous electrical stimulation for 6 mos for their facial nerve paralysis. The intervention consisted of gradually increasing the duration of electrical stimulation of three sites on the affected area for up to 6 hrs/day. Assessments of facial nerve function were performed using the House-Brackmann clinical scale and neurophysiological measurements of compound motor action potential distal latencies on the affected and nonaffected sides. Patients were tested before and after the intervention. A significant improvement was observed in the facial nerve upper branch compound motor action potential distal latency on the affected side in all patients. An improvement of one grade on the House-Brackmann scale was observed, and some patients also reported subjective improvement. Transcutaneous electrical stimulation treatment may have a positive effect on unresolved facial nerve paralysis. This study illustrates a possibly effective treatment option for patients with chronic facial paresis and no other expectations of recovery.
Rodriguez-Lorenzo, Andres; Audolfsson, Thorir; Wong, Corrine; Cheng, Angela; Arbique, Gary; Nowinski, Daniel; Rozen, Shai
2015-10-01
The aim of this study was to evaluate the contribution of a single unilateral facial vein in the venous outflow of total-face allograft using three-dimensional computed tomographic imaging techniques to further elucidate the mechanisms of venous complications following total-face transplant. Full-face soft-tissue flaps were harvested from fresh adult human cadavers. A single facial vein was identified and injected distally to the submandibular gland with a radiopaque contrast (barium sulfate/gelatin mixture) in every specimen. Following vascular injections, three-dimensional computed tomographic venographies of the faces were performed. Images were viewed using TeraRecon Software (Teracon, Inc., San Mateo, CA, USA) allowing analysis of the venous anatomy and perfusion in different facial subunits by observing radiopaque filling venous patterns. Three-dimensional computed tomographic venographies demonstrated a venous network with different degrees of perfusion in subunits of the face in relation to the facial vein injection side: 100% of ipsilateral and contralateral forehead units, 100% of ipsilateral and 75% of contralateral periorbital units, 100% of ipsilateral and 25% of contralateral cheek units, 100% of ipsilateral and 75% of contralateral nose units, 100% of ipsilateral and 75% of contralateral upper lip units, 100% of ipsilateral and 25% of contralateral lower lip units, and 50% of ipsilateral and 25% of contralateral chin units. Venographies of the full-face grafts revealed better perfusion in the ipsilateral hemifaces from the facial vein in comparison with the contralateral hemifaces. Reduced perfusion was observed mostly in the contralateral cheek unit and contralateral lower face including the lower lip and chin units. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Mandrini, Silvia; Comelli, Mario; Dall'angelo, Anna; Togni, Rossella; Cecini, Miriam; Pavese, Chiara; Dalla Toffola, Elena
2016-12-01
Only a few studies have considered the effects of combined treatment with onabotulinumtoxinA (BoNT-A) injections and biofeedback (BFB) rehabilitation in the recovery of postparetic facial synkinesis (PPFS). The aim was to explore the presence of a persistent improvement in facial function, beyond the pharmacological effect of BoNT-A, in subjects with established PPFS after repeated sessions of BoNT-A injections combined with an educational facial training program using mirror biofeedback (BFB) exercises. A secondary objective was to investigate the trend of this presumed persistent improvement. Case-series study. Outpatient Clinic of Physical Medicine and Rehabilitation Unit. Twenty-seven patients (22 females; mean age 45±16 years) affected by an established peripheral facial palsy were treated with a minimum of three BoNT-A injections in association with mirror BFB rehabilitation. The interval between consecutive BoNT-A injections was at least five months. At baseline and before every BoNT-A injection + mirror BFB session (when the effect of the previous BoNT-A injection had vanished), patients were assessed with the Italian version of the Sunnybrook Facial Grading System (SB). The statistical analysis compared SB composite and partial scores before each treatment session with the baseline scores. A significant improvement of the SB composite and partial scores was observed up to the fourth session. Considering the "Symmetry of Voluntary Movement" partial score, the main improvement was observed in the muscles of the lower part of the face. In the chronic stage of postparetic facial synkinesis, patients may benefit from a combined therapy with repeated BoNT-A injections and an educational facial training program with mirror BFB exercises, gaining an improvement of facial function up to the fourth session. This improvement reflects the acquired ability to use facial muscles correctly. It involves not the injected muscles but those trained with mirror biofeedback exercises, and it persists even when the BoNT-A action has vanished. The combined therapy with repeated BoNT-A injections and an educational facial training program using mirror BFB exercises may be useful in the motor recovery of the muscles of the lower part of the face that were not injected but trained.
Affect in Human-Robot Interaction
2014-01-01
is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of ... robot's behavior through decision-making, learning, or action selection; a number of researchers used the fuzzy logic approach to emotion generation
Eye contact modulates facial mimicry in 4-month-old infants: An EMG and fNIRS study.
de Klerk, Carina C J M; Hamilton, Antonia F de C; Southgate, Victoria
2018-05-16
Mimicry, the tendency to spontaneously and unconsciously copy others' behaviour, plays an important role in social interactions. It facilitates rapport between strangers, and is flexibly modulated by social signals, such as eye contact. However, little is known about the development of this phenomenon in infancy, and it is unknown whether mimicry is modulated by social signals from early in life. Here we addressed this question by presenting 4-month-old infants with videos of models performing facial actions (e.g., mouth opening, eyebrow raising) and hand actions (e.g., hand opening and closing, finger actions) accompanied by direct or averted gaze, while we measured their facial and hand muscle responses using electromyography to obtain an index of mimicry (Experiment 1). In Experiment 2 the infants observed the same stimuli while we used functional near-infrared spectroscopy to investigate the brain regions involved in modulating mimicry by eye contact. We found that 4-month-olds only showed evidence of mimicry when they observed facial actions accompanied by direct gaze. Experiment 2 suggests that this selective facial mimicry may have been associated with activation over posterior superior temporal sulcus. These findings provide the first demonstration of modulation of mimicry by social signals in young human infants, and suggest that mimicry plays an important role in social interactions from early in life. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
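A minimal sketch of an EMG-based mimicry index of the kind used in such studies: rectified muscle activity during action observation is expressed relative to a pre-stimulus baseline, separately per muscle site and gaze condition. The sampling rate, window lengths, and data are assumptions.

```python
# Baseline-corrected mean rectified EMG for one trial (1-D signal).
import numpy as np

FS = 1000                                   # assumed sampling rate, Hz

def mimicry_index(emg, baseline_s=1.0, observation_s=3.0):
    b = int(baseline_s * FS)
    o = int(observation_s * FS)
    baseline = np.abs(emg[:b]).mean()
    observation = np.abs(emg[b:b + o]).mean()
    return (observation - baseline) / baseline   # proportional change

rng = np.random.default_rng(7)
trial = rng.normal(0, 1.0, size=4 * FS)
trial[FS:] *= 1.3                            # toy increase after stimulus onset
print("mimicry index:", round(mimicry_index(trial), 3))
```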
Social communication in siamangs (Symphalangus syndactylus): use of gestures and facial expressions.
Liebal, Katja; Pika, Simone; Tomasello, Michael
2004-01-01
The current study represents the first systematic investigation of the social communication of captive siamangs (Symphalangus syndactylus). The focus was on intentional signals, including tactile and visual gestures, as well as facial expressions and actions. Fourteen individuals from different groups were observed and the signals used by individuals were recorded. Thirty-one different signals, consisting of 12 tactile gestures, 8 visual gestures, 7 actions, and 4 facial expressions, were observed, with tactile gestures and facial expressions appearing most frequently. The range of the signal repertoire increased steadily until the age of six, but declined afterwards in adults. The proportions of the different signal categories used within communicative interactions, in particular actions and facial expressions, also varied depending on age. Group differences could be traced back mainly to social factors or housing conditions. Differences in the repertoire of males and females were most obvious in the sexual context. Overall, most signals were used flexibly, with the majority performed in three or more social contexts and almost one-third of signals used in combination with other signals. Siamangs also adjusted their signals appropriately for the recipient, for example, using visual signals most often when the recipient was already attending (audience effects). These observations are discussed in the context of siamang ecology, social structure, and cognition.
Facial Displays Are Tools for Social Influence.
Crivelli, Carlos; Fridlund, Alan J
2018-05-01
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Boyd, Hope; Murnen, Sarah K
2017-06-01
We examined the extent to which popular dolls and action figures were portrayed with gendered body proportions, and the extent to which these gendered ideals were associated with heterosexual "success." We coded internet depictions of 72 popular female dolls and 71 popular male action figures from the websites of three national stores in the United States. Sixty-two percent of dolls had a noticeably thin body, while 42.3% of action figures had noticeably muscular bodies. Further, more thin dolls were portrayed with more sex object features than less thin dolls, including revealing, tight clothing and high-heeled shoes; bodies positioned with a curved spine, bent knee, and head cant; and with a sexually appealing facial expression. More muscular male action figures were more likely than less muscular ones to be shown with hands in fists and with an angry, emotional expression, suggesting male dominance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Facial Expression Generation from Speaker's Emotional States in Daily Conversation
NASA Astrophysics Data System (ADS)
Mori, Hiroki; Ohshima, Koh
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions and the latter are coded with the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on these data. The effectiveness of the proposed method was verified by a subjective evaluation test: the mean opinion score for the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions.
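The abstract describes a learned mapping from rated emotional-state vectors to FACS-coded facial expressions. Below is a minimal sketch of how such a mapping could be trained; the three-dimensional emotion space, the set of six action units, and the network size are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: learn a mapping from emotional-state vectors to FACS
# action-unit (AU) intensities with a small feed-forward network.
# The 3-D emotion space, the six AUs, and the network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each utterance has a rated emotional state
# (e.g., pleasantness, arousal, dominance) and coded intensities for a
# small set of AUs (e.g., AU1, AU2, AU4, AU6, AU12, AU15).
emotion_states = rng.uniform(-1.0, 1.0, size=(200, 3))   # rated emotion vectors
au_intensities = rng.uniform(0.0, 1.0, size=(200, 6))    # coded AU intensities

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(emotion_states, au_intensities)

# Generate an AU-intensity vector (i.e., a facial expression) for a new state.
new_state = np.array([[0.6, 0.3, -0.2]])
print(np.round(net.predict(new_state), 2))
```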
Selective Transfer Machine for Personalized Facial Expression Analysis
Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.
2017-01-01
Automatic facial action unit (AU) and expression detection from videos is a long-standing problem. The problem is challenging in part because classifiers must generalize to previously unknown subjects that differ markedly in behavior and facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) from those on which the classifiers are trained. While some progress has been achieved through improvements in choices of features and classifiers, the challenge occasioned by individual differences among people remains. Person-specific classifiers would be a possible solution, but sufficient training data for them is typically unavailable. This paper addresses the problem of how to personalize a generic classifier without additional labels from the test subject. We propose a transductive learning method, which we refer to as a Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific mismatches. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. We compared STM to both generic classifiers and cross-domain learning methods on four benchmarks: CK+ [44], GEMEP-FERA [67], RU-FACS [4] and GFT [57]. STM outperformed generic classifiers in all. PMID:28113267
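The core idea in the abstract is to re-weight training samples toward those that resemble the unlabeled test subject while fitting the detector. The sketch below is a simplified stand-in, not the actual STM optimization (which couples the sample weights with an SVM objective): it weights training frames by their kernel similarity to the test subject's frames and fits a weighted linear classifier. The feature dimensionality, kernel bandwidth, and classifier choice are assumptions.

```python
# Simplified stand-in for personalizing a generic AU detector by re-weighting
# training frames according to their similarity to an unlabeled test subject.
# NOT the actual STM algorithm; it only illustrates the re-weighting idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)

# Hypothetical appearance features (e.g., landmark/texture descriptors) and AU labels.
X_train = rng.normal(size=(500, 32))
y_train = rng.integers(0, 2, size=500)          # AU present / absent
X_test = rng.normal(loc=0.3, size=(120, 32))    # frames of the unseen test subject

# Weight each training frame by its mean RBF similarity to the test frames,
# so frames resembling the test subject dominate the fit.
weights = rbf_kernel(X_train, X_test, gamma=0.05).mean(axis=1)
weights /= weights.mean()

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train, sample_weight=weights)
au_probabilities = clf.predict_proba(X_test)[:, 1]
```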
Facial Specialty. Teacher Edition. Cosmetology Series.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This publication is one of a series of curriculum guides designed to direct and support instruction in vocational cosmetology programs in the State of Oklahoma. It contains seven units for the facial specialty: identifying enemies of the skin, using aromatherapy on the skin, giving facials without the aid of machines, giving facials with the aid…
The interaction between embodiment and empathy in facial expression recognition
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2018-01-01
Abstract Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON will impair action recognition and that this impairment will be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation. PMID:29378022
The shared neural basis of empathy and facial imitation accuracy.
Braadbaart, L; de Grauw, H; Perrett, D I; Waiter, G D; Williams, J H G
2014-01-01
Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression. © 2013.
Wollina, Uwe
2016-03-01
Facial aging is a major indication for minimally invasive esthetic procedures. Dermal fillers are a cornerstone of facial sculpting. But where to start? Our concept is midfacial volume restoration first. This results in a healthy and youthful appearance, creating a facial V-shape. Midfacial filler injection does not only improve the malar area; it also has beneficial effects on neighboring esthetic units. We report on such improvements in the periocular and nasolabial regions, the upper lip and perioral tissue, and the jaw line, and discuss the anatomical background. We hypothesize that deep midfacial filler injections may also activate subdermal white adipose tissue stem cells, contributing to longer-lasting rejuvenation. © 2015 Wiley Periodicals, Inc.
Another Scale for the Assessment of Facial Paralysis? ADS Scale: Our Proposition, How to Use It.
Di Stadio, Arianna
2015-12-01
Over the years, several authors have proposed different methods to evaluate regional and movement-specific deficits in patients affected by facial palsy. Despite these efforts, the House-Brackmann scale remains the most widely used assessment in the medical community. The aim of our study is to propose and assess a new rating scale, the Arianna Disease Scale (ADS), for the clinical evaluation of facial paralysis. Sixty patients affected by unilateral Bell paralysis were enrolled in a prospective study from 2012 to 2014. Their facial nerve function was evaluated with our assessment, which divides the face into upper, middle and lower thirds. We analysed different facial expressions, each movement corresponding to the action of different muscles. The action of each muscle was scored from 0 to 1, with 0 corresponding to complete flaccid paralysis and 1 to normal muscle function. Synkinesis was also evaluated in the scale, with a fixed score of 0.5. Our results considered the ease and speed of the assessment, the accuracy of muscle-deficit grading, and the ability to quantify synkinesis with a score. All three observers agreed 100% on the highest degree of deficit. We found some discrepancies in intermediate scores, with 92% agreement in the upper face, 87% in the middle face and 80% in the lower face, where more muscles are involved in movements. Our scale has some limitations linked to the small group of patients evaluated, and the intermediate scores of 0.3 and 0.7 were somewhat difficult to interpret. Nevertheless, it was an accurate tool for quickly evaluating facial nerve function and has potential as an alternative scale for diagnosing facial nerve disorders.
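The abstract specifies per-muscle scores from 0 (complete flaccid paralysis) to 1 (normal function) across upper, middle and lower facial thirds, with a fixed 0.5 for synkinesis, but not how the scores are aggregated. The sketch below shows one way such ratings might be tallied; the muscle lists and the per-third averaging are illustrative assumptions only.

```python
# Illustrative tally of ADS-style per-muscle ratings. The muscle lists and the
# per-third averaging are assumptions; the abstract specifies only the 0-1
# per-muscle range and a fixed 0.5 score when synkinesis is observed.
SYNKINESIS_SCORE = 0.5

ratings = {
    "upper": {"frontalis": 0.7, "corrugator": 1.0},
    "middle": {"orbicularis_oculi": 0.3, "zygomaticus_major": 0.5},
    "lower": {"orbicularis_oris": 0.0, "depressor_anguli_oris": 0.3},
}

def third_score(muscles, synkinesis=False):
    """Average the 0-1 muscle ratings of one facial third; record synkinesis
    with the fixed 0.5 score if it is observed in that third."""
    scores = list(muscles.values())
    if synkinesis:
        scores.append(SYNKINESIS_SCORE)
    return sum(scores) / len(scores)

for third, muscles in ratings.items():
    print(f"{third} face: {third_score(muscles, synkinesis=(third == 'lower')):.2f}")
```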
The Functional Role of the Periphery in Emotional Language Comprehension
Havas, David A.; Matheson, James
2013-01-01
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that (1) facial expressions accompany sudden shifts in wellbeing as described in language; (2) facial expressions modulate emotional action systems during reading; and (3) emotional action systems prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition. PMID:23750145
Atkinson, Jo; Campbell, Ruth; Marshall, Jane; Thacker, Alice; Woll, Bencie
2004-01-01
Simple negation in natural languages represents a complex interrelationship of syntax, prosody, semantics and pragmatics, and may be realised in various ways: lexically, morphologically and prosodically. In almost all spoken languages, the first two of these are the primary realisations of syntactic negation. In contrast, in many signed languages negation can occur without lexical or morphological marking. Thus, in British Sign Language (BSL), negation is obligatorily expressed using face-head actions alone (facial negation) with the option of articulating a manual form alongside the required face-head actions (lexical negation). What are the processes underlying facial negation? Here, we explore this question neuropsychologically. If facial negation reflects lexico-syntactic processing in BSL, it may be relatively spared in people with unilateral right hemisphere (RH) lesions, as has been suggested for other 'grammatical facial actions' [Language and Speech 42 (1999) 307; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence)]. Three BSL users with RH lesions were specifically impaired in perceiving facial compared with manual (lexical and morphological) negation. This dissociation was absent in three users of BSL with left hemisphere lesions and different degrees of language disorder, who also showed relative sparing of negation comprehension. We conclude that, in contrast to some analyses [Applied Psycholinguistics 18 (1997) 411; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence); Archives of Neurology 36 (1979) 837], non-manual negation in sign may not be a direct surface realisation of syntax [Language and Speech 42 (1999) 143; Language and Speech 42 (1999) 127]. Difficulties with facial negation in the RH-lesion group were associated with specific impairments in processing facial images, including facial expressions. However, they did not reflect generalised 'face-blindness', since the reading of (English) speech patterns from faces was spared in this group. We propose that some aspects of the linguistic analysis of sign language are achieved by prosodic analysis systems (analysis of face and head gestures), which are lateralised to the minor hemisphere.
Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.
Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S
2007-01-01
People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.
United States Homeland Security and National Biometric Identification
2002-04-09
security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are
Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease
Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul
2016-01-01
According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393
NASA Astrophysics Data System (ADS)
Amijoyo Mochtar, Andi
2018-02-01
Applications of robotics have become important to human life in recent years. Many robot designs have been improved and enriched by advances in technology. Among them are humanoid robots with facial expressions that closely approximate natural human facial expressions. The purpose of this research is to perform computations on facial expressions and to conduct tensile-strength testing of silicone rubber as an artificial skin. Facial expressions were computed by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. The robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expressions mimic human facial expressions, with the facial muscle structure following human facial anatomy. For developing facial expression robots, the Facial Action Coding System (FACS) is adopted in order to follow human expressions. Tensile testing is conducted to check the proportional force that the artificial skin can sustain in future robot facial expressions. Combining computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.
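The abstract pairs a finite-element computation of facial expressions with tensile testing of the silicone-rubber skin. Below is a minimal sketch of the basic tensile-test arithmetic (engineering stress, engineering strain, and an elastic-modulus estimate from the initial linear region); the specimen geometry and load-extension data are invented for illustration, not taken from the study.

```python
# Basic tensile-test arithmetic for an artificial-skin specimen:
# engineering stress = F / A0, engineering strain = dL / L0.
# Specimen geometry and load-extension data are illustrative only.
import numpy as np

L0 = 50.0e-3            # gauge length [m]
A0 = 10.0e-3 * 2.0e-3   # cross-section: 10 mm x 2 mm [m^2]

force = np.array([0.0, 2.0, 4.0, 6.0, 8.0])              # [N]
extension = np.array([0.0, 1.0, 2.1, 3.3, 4.8]) * 1e-3   # [m]

stress = force / A0      # [Pa]
strain = extension / L0  # [-]

# Estimate the elastic modulus from the initial (approximately linear) region.
E = np.polyfit(strain[:3], stress[:3], 1)[0]
print(f"Estimated modulus: {E / 1e6:.2f} MPa")
```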
Reconstruction of massive facial avulsive injury, secondary to animal bite.
Motamed, Sadrollah; Niazi, Feizollah; Moosavizadeh, Seyed Mehdi; Gholizade Pasha, Abdolhamid; Motamed, Ali
2014-02-01
Management of facial soft tissue trauma requires complex reconstructive surgery. Animal bites to the face are a common cause of severe facial tissue destruction. Evaluation of the involved facial units is the first step, followed by planning of the reconstruction. In this case, we used multiple reconstructive options.
Crotoxin in humans: analysis of the effects on extraocular and facial muscles.
Ribeiro, Geraldo de Barros; Almeida, Henderson Celestino de; Velarde, David Toledo
2012-01-01
Crotoxin is the main neurotoxin of South American rattlesnake Crotalus durissus terrificus. The neurotoxic action is characterized by a presynaptic blockade. The purpose of this research is to assess the ability of crotoxin to induce temporary paralysis of extraocular and facial muscles in humans. Doses of crotoxin used ranged from 2 to 5 units (U), each unit corresponding to one LD50. We first applied 2U of crotoxin in one of the extraocular muscles of 3 amaurotic individuals to be submitted to ocular evisceration. In the second stage, we applied crotoxin in 12 extraocular muscles of 9 patients with strabismic amblyopia. In the last stage, crotoxin was used in the treatment of blepharospasm in another 3 patients. No patient showed any systemic side effect or change in vision or any eye structure problem after the procedure. The only local side effects observed were slight conjunctival hyperemia, which recovered spontaneously. In 2 patients there was no change in ocular deviation after 2U crotoxin application. Limitation of the muscle action was observed in 8 of the 12 applications. The change in ocular deviation after application of 2U of crotoxin (9 injections) was in average 15.7 prism diopters (PD). When the dose was 4U (2 applications) the change was in average 37.5 PD and a single application of 5U produced a change of 16 PD in ocular deviation. This effect lasted from 1 to 3 months. Two of the 3 patients with blepharospasm had the hemifacial spasm improved with crotoxin, which returned after 2 months. This study provides data suggesting that crotoxin may be a useful new therapeutic option for the treatment of strabismus and blepharospasm. We expect that with further studies crotoxin could be an option for many other medical areas.
Shih, Yu-Ling; Lin, Chia-Yen
2016-08-01
Action anticipation plays an important role in the successful performance of open skill sports, such as ball and combat sports. Evidence has shown that elite athletes of open sports excel in action anticipation. Most studies have targeted ball sports and agreed that information on body mechanics is one of the key determinants for successful action anticipation in open sports. However, less is known about combat sports, and whether facial emotions have an influence on athletes' action anticipation skill. It has been suggested that the understanding of intention in combat sports relies heavily on emotional context. Based on this suggestion, the present study compared the action anticipation performances of taekwondo athletes, weightlifting athletes, and non-athletes and then correlated these with their performances of emotion recognition. This study primarily found that accurate action anticipation does not necessarily rely on the dynamic information of movement, and that action anticipation performance is correlated with that of emotion recognition in taekwondo athletes, but not in weightlifting athletes. Our results suggest that the recognition of facial emotions plays a role in the action prediction in such combat sports as taekwondo.
15 minute consultation: a structured approach to the management of facial paralysis in a child.
Malik, Vikas; Joshi, Vineeta; Green, Kevin M J; Bruce, Iain A
2012-06-01
To present a structured approach for an outpatient consultation of a child with facial paralysis. Review of literature and description of approach followed in our unit. A focused history and examination is key to establish the cause and draw a management plan for paediatric facial paralysis.
Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha
2015-09-01
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is because supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
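The abstract describes a lightweight gate that compares the appearance around key points with a per-user neutral reference model and skips emotion classification when the frame looks neutral. The sketch below illustrates that idea in its simplest form; the patch representation, the normalized-correlation similarity, and the threshold are assumptions and do not reproduce the paper's statistical texture model.

```python
# Sketch of a lightweight neutral-vs-emotion gate: compare patches around
# key points with a per-user neutral reference and run the full emotion
# classifier only when the frame does not look neutral.
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean, unit-variance correlation between two grayscale patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def is_neutral(frame_patches, neutral_model, threshold=0.8):
    """frame_patches / neutral_model: dict key-point name -> patch array."""
    sims = [normalized_correlation(frame_patches[k], neutral_model[k])
            for k in neutral_model]
    return np.mean(sims) > threshold

# Usage: build neutral_model from a few reference neutral frames per user,
# then call is_neutral() on each incoming frame; frames flagged as neutral
# bypass the (more expensive) supervised emotion classifier.
```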
Do facial movements express emotions or communicate motives?
Parkinson, Brian
2005-01-01
This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.
Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy
2018-06-01
The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications in patients operated on in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad, from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed using the chi-squared test. A total of 1146 patients reported to our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the most commonly fractured facial bone (71.2%). Maxillary central incisors were the most commonly injured teeth (33.8%), and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, and upper and lower limb fractures; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. Statistical analysis of these records clarifies the relationship of facial fractures with gender, age, associated comorbidities, etc.
The effect of facial expressions on peripersonal and interpersonal spaces.
Ruggiero, Gennaro; Frassinetti, Francesca; Coello, Yann; Rapuano, Mariachiara; di Cola, Armando Schiano; Iachini, Tina
2017-11-01
Identifying individuals' intent through the emotional valence conveyed by their facial expression influences our capacity to approach-avoid these individuals during social interactions. Here, we explore if and how the emotional valence of others' facial expressiveness modulates peripersonal-action and interpersonal-social spaces. Through Immersive Virtual Reality, participants determined reachability-distance (for peripersonal space) and comfort-distance (for interpersonal space) from male/female virtual confederates exhibiting happy, angry and neutral facial expressions while being approached by (passive-approach) or walking toward (active-approach) them. Results showed an increase of distance when seeing angry rather than happy confederates in both approach conditions of comfort-distance. The effect also appeared in reachability-distance, but only in the passive-approach. Anger prompts avoidant behaviors, and thus an expansion of distance, particularly with a potential violation of near body space by an intruder. Overall, the findings suggest that peripersonal-action space, in comparison with interpersonal-social space, is similarly sensitive to the emotional valence of stimuli. We propose that this similarity could reflect a common adaptive mechanism shared by these spaces, presumably at different degrees, for ensuring self-protection functions.
Bad to the bone: facial structure predicts unethical behaviour.
Haselhuhn, Michael P; Wong, Elaine M
2012-02-07
Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour.
Spontaneous and posed facial expression in Parkinson's disease.
Smith, M C; Smith, M K; Ellgring, H
1996-09-01
Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.
Facial correlates of emotional behaviour in the domestic cat (Felis catus).
Bennett, Valerie; Gourkow, Nadine; Mills, Daniel S
2017-08-01
Leyhausen's (1979) work on cat behaviour and facial expressions associated with offensive and defensive behaviour is widely embraced as the standard for interpretation of agonistic behaviour in this species. However, it is a largely anecdotal description that can be easily misunderstood. Recently a facial action coding system has been developed for cats (CatFACS), similar to that used for objectively coding human facial expressions. This study reports on the use of this system to describe the relationship between behaviour and facial expressions of cats in confinement contexts without and with human interaction, in order to generate hypotheses about the relationship between these expressions and underlying emotional state. Video recordings taken of 29 cats resident in a Canadian animal shelter were analysed using 1-0 sampling of 275 4-s video clips. Observations under the two conditions were analysed descriptively using hierarchical cluster analysis for binomial data and indicated that in both situations, about half of the data clustered into three groups. An argument is presented that these largely reflect states based on varying degrees of relaxed engagement, fear and frustration. Facial actions associated with fear included blinking and half-blinking and a left head and gaze bias at lower intensities. Facial actions consistently associated with frustration included hissing, nose-licking, dropping of the jaw, the raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue. Relaxed engagement appeared to be associated with a right gaze and head turn bias. The results also indicate potential qualitative changes associated with differences in intensity in emotional expression following human intervention. The results were also compared to the classic description of "offensive and defensive moods" in cats (Leyhausen, 1979) and previous work by Gourkow et al. (2014a) on behavioural styles in cats in order to assess if these observations had replicable features noted by others. This revealed evidence of convergent validity between the methods. However, the use of CatFACS revealed elements relating to vocalisation and response lateralisation, not previously reported in this literature. Copyright © 2017 Elsevier B.V. All rights reserved.
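The analysis described in the abstract clusters 1-0 sampled (binary) facial-action records with hierarchical cluster analysis for binomial data. A minimal sketch of that kind of analysis is shown below; the Jaccard distance, the average linkage, and the choice of three clusters are assumptions, since the abstract does not state which distance or linkage the authors used.

```python
# Sketch of clustering binary (1-0 sampled) facial-action records.
# The Jaccard distance, average linkage, and cluster count are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

# Rows: 4-s clips; columns: presence/absence of coded facial actions.
clips = rng.integers(0, 2, size=(275, 20))

distances = pdist(clips, metric="jaccard")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")  # e.g., three clusters
print(np.bincount(labels)[1:])                      # clips per cluster
```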
Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal
2015-03-01
The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot pack, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. The classification of patients according to House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were statistically shorter in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1. The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks), improved functional facial movements and electrophysiologic outcome measures at the 3-mo follow-up in patients with Bell palsy. Further research focused on determining the most effective dosage and length of intervention with electrical stimulation is warranted.
Social Use of Facial Expressions in Hylobatids
Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja
2016-01-01
Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660
Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus
2005-01-01
The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
The Impact of Experience on Affective Responses during Action Observation.
Kirsch, Louise P; Snagg, Arielle; Heerey, Erin; Cross, Emily S
2016-01-01
Perceiving others in action elicits affective and aesthetic responses in observers. The present study investigates the extent to which these responses relate to an observer's general experience with observed movements. Facial electromyographic (EMG) responses were recorded in experienced dancers and non-dancers as they watched short videos of movements performed by professional ballet dancers. Responses were recorded from the corrugator supercilii (CS) and zygomaticus major (ZM) muscles, both of which show engagement during the observation of affect-evoking stimuli. In the first part of the experiment, participants passively watched the videos while EMG data were recorded. In the second part, they explicitly rated how much they liked each movement. Results revealed a relationship between explicit affective judgments of the movements and facial muscle activation only among those participants who were experienced with the movements. Specifically, CS activity was higher for disliked movements and ZM activity was higher for liked movements among dancers but not among non-dancers. The relationship between explicit liking ratings and EMG data in experienced observers suggests that facial muscles subtly echo affective judgments even when viewing actions that are not intentionally emotional in nature, thus underscoring the potential of EMG as a method to examine subtle shifts in implicit affective responses during action observation.
De Boulle, Koenraad; Fagien, Steven; Sommer, Boris; Glogau, Richard
2010-04-26
Botulinum toxin type A treatment is the foundation of minimally invasive aesthetic facial procedures. Clinicians and their patients recognize the important role, both negative and positive, that facial expression, particularly the glabellar frown lines, plays in self-perception, emotional well-being, and perception by others. This article provides up-to-date information on fundamental properties and mechanisms of action of the major approved formulations of botulinum toxin type A, summarizes recent changes in naming conventions (nonproprietary names) mandated by the United States Food and Drug Administration, and describes the reasons for these changes. The request for these changes provides recognition that formulations of botulinum toxins (eg, onabotulinumtoxinA and abobotulinumtoxinA) are not interchangeable and that dosing recommendations cannot be based on any single conversion ratio. The extensive safety, tolerability, and efficacy data are summarized in detail, including the patient-reported outcomes that contribute to overall patient satisfaction and the probability of treatment continuation. Based on this in-depth review, the authors conclude that botulinum toxin type A treatment remains a cornerstone of facial aesthetic treatments, and clinicians must realize that techniques and dosing from one formulation cannot be applied to others, that each patient should undergo a full aesthetic evaluation, and that products and procedures must be selected in the context of individual needs and goals.
Signs of Facial Aging in Men in a Diverse, Multinational Study: Timing and Preventive Behaviors.
Rossi, Anthony M; Eviatar, Joseph; Green, Jeremy B; Anolik, Robert; Eidelman, Michael; Keaney, Terrence C; Narurkar, Vic; Jones, Derek; Kolodziejczyk, Julia; Drinkwater, Adrienne; Gallagher, Conor J
2017-11-01
Men are a growing patient population in aesthetic medicine and are increasingly seeking minimally invasive cosmetic procedures. To examine differences in the timing of facial aging and in the prevalence of preventive facial aging behaviors in men by race/ethnicity. Men aged 18 to 75 years in the United States, Canada, United Kingdom, and Australia rated their features using photonumeric rating scales for 10 facial aging characteristics. Impact of race/ethnicity (Caucasian, black, Asian, Hispanic) on severity of each feature was assessed. Subjects also reported the frequency of dermatologic facial product use. The study included 819 men. Glabellar lines, crow's feet lines, and nasolabial folds showed the greatest change with age. Caucasian men reported more severe signs of aging and earlier onset, by 10 to 20 years, compared with Asian, Hispanic, and, particularly, black men. In all racial/ethnic groups, most men did not regularly engage in basic, antiaging preventive behaviors, such as use of sunscreen. Findings from this study conducted in a globally diverse sample may guide clinical discussions with men about the prevention and treatment of signs of facial aging, to help men of all races/ethnicities achieve their desired aesthetic outcomes.
Facial recognition: a cognitive study of elderly dementia patients and normal older adults.
Zandi, T; Cooper, M; Garrison, L
1992-01-01
Recognition of familiar, ordinary emotional facial expressions was tested in dementia patients and healthy older adults. In three conditions, subjects were required to name the emotions depicted in pictures and to produce them when presented with the verbal labels of the expressions. The dementia patients' best performance occurred when they had access to the verbal labels while viewing the pictures. The major deficiency in facial recognition was found to be dysnomia-related. The findings of this study suggest that the connection between the gnostic units of expression and the gnostic units of verbal labeling is not significantly impaired in dementia patients.
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Anaplastology in times of facial transplantation: Still a reasonable treatment option?
Toso, Sabine Maria; Menzel, Kerstin; Motzkus, Yvonne; Klein, Martin; Menneking, Horst; Raguse, Jan-Dirk; Nahles, Susanne; Hoffmeister, Bodo; Adolphs, Nicolai
2015-09-01
Optimum functional and aesthetic facial reconstruction is still a challenge in patients who suffer from inborn or acquired facial deformity. It is known that functional and aesthetic impairment can result in significant psychosocial strain, leading to the social isolation of patients who are affected by major facial deformities. Microvascular techniques and increasing experience in facial transplantation certainly contribute to better restorative outcomes. However, these technologies also have some drawbacks, limitations and unsolved problems. Extensive facial defects which include several aesthetic units and dentition can be restored by combining dental prostheses and anaplastology, thus providing an adequate functional and aesthetic outcome in selected patients without the drawbacks of major surgical procedures. Referring to some representative patient cases, it is shown how extreme facial disfigurement after oncological surgery can be palliated by combining intraoral dentures with extraoral facial prostheses using individualized treatment and without the need for major reconstructive surgery. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute
2016-01-01
Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in patients (15 of them) with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients (in spite of a reduced mimic reaction), we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
Summary In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
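The abstract attributes the machine system's advantage to its use of facial-movement dynamics. The sketch below shows, in simplified form, how a classifier could be trained on summary statistics of per-frame action-unit estimates; the feature set (peak, mean, onset slope), the clip dimensions, and the SVM are illustrative assumptions and not the authors' pipeline.

```python
# Sketch of classifying genuine vs. faked expressions from facial-action
# dynamics. Per-frame AU estimates, the summary features, and the SVM
# classifier are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def dynamics_features(au_timeseries):
    """au_timeseries: array (n_frames, n_aus) of AU intensity estimates."""
    peak = au_timeseries.max(axis=0)
    mean = au_timeseries.mean(axis=0)
    onset_slope = np.diff(au_timeseries, axis=0).max(axis=0)  # fastest rise per AU
    return np.concatenate([peak, mean, onset_slope])

# Hypothetical dataset: 60 clips, 100 frames, 10 AUs; label 1 = genuine pain.
X = np.stack([dynamics_features(rng.random((100, 10))) for _ in range(60)])
y = rng.integers(0, 2, size=60)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```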
Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord
2015-01-01
We report the development (Study 1) and validation (Study 2) of a categorical system for attributing facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force, 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al., 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman, 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self and object). In Study 2 we rate the facial expressions (n=2320) from the OPD interviews (10 minutes per interview) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP; Horowitz et al., 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE Coding System. The attribution of interactive, self-related and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.
The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity
Korb, Sebastian; With, Stéphane; Niedenthal, Paula; Kaiser, Susanne; Grandjean, Didier
2014-01-01
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns. PMID:24918939
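The abstract reports that activity in the mimicked muscles predicted authenticity judgments. Below is an illustrative trial-level regression relating facial EMG channels to judgments; it is not the authors' statistical model (which also tested mediation), and the data shapes and rating scale are assumptions.

```python
# Illustrative trial-level regression of smile-authenticity judgments on
# facial EMG activity (e.g., Zygomaticus Major, Orbicularis Oculi,
# Corrugator, plus a fourth channel). Not the authors' analysis; data
# shapes and the 1-7 rating scale are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

emg = rng.normal(size=(400, 4))          # standardized EMG per trial and muscle
judgment = rng.uniform(1, 7, size=400)   # authenticity rating per trial

model = LinearRegression().fit(emg, judgment)
print(np.round(model.coef_, 3))          # per-muscle association with judgments
```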
Stability of Facial Affective Expressions in Schizophrenia
Fatouros-Bergman, H.; Spang, J.; Merten, J.; Preisler, G.; Werbart, A.
2012-01-01
Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. 64 selected sequences where the patients were speaking about psychotic experiences were scored for facial affective behaviour with Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affects displayed across the weekly interview occasions. Whereas previous findings found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt and implies that this seems to be a fairly stable feature. PMID:22966449
Facial bacterial infections: folliculitis.
Laureano, Ana Cristina; Schwartz, Robert A; Cohen, Philip J
2014-01-01
Facial bacterial infections are most commonly caused by infections of the hair follicles. Wherever pilosebaceous units are found folliculitis can occur, with the most frequent bacterial culprit being Staphylococcus aureus. We review different origins of facial folliculitis, distinguishing bacterial forms from other infectious and non-infectious mimickers. We distinguish folliculitis from pseudofolliculitis and perifolliculitis. Clinical features, etiology, pathology, and management options are also discussed. Copyright © 2014. Published by Elsevier Inc.
When action meets emotions: how facial displays of emotion influence goal-related behavior.
Ferri, Francesca; Stoianov, Ivilin Peev; Gianelli, Claudia; D'Amico, Luigi; Borghi, Anna M; Gallese, Vittorio
2010-10-01
Many authors have proposed that facial expressions, by conveying the emotional state of the person we are interacting with, influence interaction behavior. We aimed to verify how specific the effect of an individual's facial expressions of emotion (both their valence and their relevance/specificity for the purpose of the action) is on how an action aimed at that individual is executed. In addition, we investigated whether and how the effects of emotions on action execution are modulated by participants' empathic attitudes. We used a kinematic approach to analyze the simulation of feeding others, which consisted of recording the "feeding trajectory" with a computer mouse. Actors could express different highly arousing emotions, namely happiness, disgust, or anger, or a neutral expression. Response time was sensitive to the interaction between valence and relevance/specificity of emotion: disgust caused faster responses. In addition, happiness induced slower feeding times and longer times to peak velocity, but only in blocks where it alternated with expressions of disgust. The kinematic profiles described how the effect of the specificity of the emotional context for feeding, namely a modulation of accuracy requirements, unfolds. An early acceleration in kinematic feeding profiles, relative to neutral, occurred when actors expressed positive emotions (happiness) in blocks with negative emotions specific to feeding (disgust). On the other hand, the end-part of the action was slower when feeding happy faces than neutral faces, confirming increased accuracy requirements and motor control. These kinematic effects were modulated by participants' empathic attitudes. In conclusion, the social dimension of emotions, that is, their ability to modulate others' action planning and execution, strictly depends on their relevance and specificity to the purpose of the action. This finding argues against a strict distinction between social and nonsocial emotions.
Umekawa, Motoyuki; Hatano, Keiko; Matsumoto, Hideyuki; Shimizu, Takahiro; Hashida, Hideji
2017-05-27
The patient was a 47-year-old man who presented with diplopia and gait instability with a gradual onset over the course of three days. Neurological examinations showed ophthalmoplegia, diminished tendon reflexes, and truncal ataxia. Tests for anti-GQ1b antibodies and several other antibodies to ganglioside complex were positive. We made a diagnosis of Fisher syndrome. After administration of intravenous immunoglobulin, the patient's symptoms gradually improved. However, bilateral facial palsy appeared during the recovery phase. Brain MRI showed intensive contrast enhancement of bilateral facial nerves. During the onset phase of facial palsy, the amplitude of the compound muscle action potential (CMAP) in the facial nerves was preserved. During the peak phase, the facial CMAP amplitude was within the lower limit of normal values, or mildly decreased. During the recovery phase, the CMAP amplitude was normalized, and the R1 and R2 responses of the blink reflex were prolonged. The delayed facial nerve palsy improved spontaneously, and the enhancement on brain MRI disappeared. Serial neurophysiological and neuroradiological examinations suggested that the main lesions existed in the proximal part of the facial nerves and the mild lesions existed in the facial nerve terminals, probably due to reversible conduction failure.
Bad to the bone: facial structure predicts unethical behaviour
Haselhuhn, Michael P.; Wong, Elaine M.
2012-01-01
Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour. PMID:21733897
Responsibility and the sense of agency enhance empathy for pain
Lepron, Evelyne; Causse, Michaël; Farrer, Chlöé
2015-01-01
Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reaction to others' emotions. We conducted two experiments in order to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural measures (estimation of pain intensity from facial expressions and ratings of unpleasantness for the observer) and an electrophysiological measure (facial electromyography). Experiment 1 showed an enhanced empathic response (increased unpleasantness for the observer and larger facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness ratings also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct. PMID:25473014
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has given further impetus to the rapid advancement of complex virtual human facial models. Because face-to-face communication is the most natural form of human interaction, facial animation systems have become attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging problem. A facial expression carries the signature of happiness, sadness, anger, or cheerfulness, and the mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expression. Facial expressions, being a complex and important nonverbal communication channel, are tricky to synthesize realistically using computer graphics: computer synthesis of practical facial expressions must deal with both the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. BSI is used to generate the natural face, while FACS is employed to reflect the exact facial muscle movements for four basic emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions based on facial skin color and texture may contribute to the development of virtual reality and game environments in computer-aided graphics animation systems.
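For readers unfamiliar with the BSI technique named above, its core operation is a weighted sum of per-target displacements from a neutral mesh. The minimal Python sketch below illustrates that idea with hypothetical AU-style target shapes and weights; it is an assumption-laden illustration, not the authors' implementation.

import numpy as np

def blend_shapes(neutral, au_targets, weights):
    """Linear blend shape interpolation.

    neutral    : (V, 3) array of neutral-face vertex positions.
    au_targets : dict mapping a target name (e.g. an AU) to a (V, 3) array of
                 vertex positions for the fully activated target shape.
    weights    : dict mapping the same names to activation weights in [0, 1].

    The blended face is the neutral mesh plus a weighted sum of the
    per-target displacement deltas.
    """
    face = neutral.astype(float).copy()
    for name, target in au_targets.items():
        w = weights.get(name, 0.0)
        face += w * (target - neutral)
    return face

# Illustrative usage on a tiny 4-vertex "mesh" (all values are made up).
neutral = np.zeros((4, 3))
au_targets = {
    "AU12_lip_corner_puller": np.array([[0, 0.1, 0], [0, 0.1, 0], [0, 0, 0], [0, 0, 0]], float),
    "AU4_brow_lowerer":       np.array([[0, 0, 0], [0, 0, 0], [0, -0.05, 0], [0, -0.05, 0]], float),
}
smiling_face = blend_shapes(neutral, au_targets, {"AU12_lip_corner_puller": 0.8})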
Infant Expressions in an Approach/Withdrawal Framework
Sullivan, Margaret Wolan
2014-01-01
Since the introduction of empirical methods for studying facial expression, the interpretation of infant facial expressions has generated much debate. The premise of this paper is that action tendencies of approach and withdrawal constitute a core organizational feature of emotion in humans, promoting coherence of behavior, facial signaling, and physiological responses. The approach/withdrawal framework can provide a taxonomy of contexts and a neurobehavioral framework for the systematic, empirical study of individual differences in expression, physiology, and behavior within individuals as well as across contexts over time. By adopting this framework in developmental work on basic emotion processes, it may be possible to better understand the behavioral principles governing facial displays, and how individual differences in them are related to physiology, behavior, and function in context. PMID:25412273
Changes in nuclear morphology and chromatin texture of basal keratinocytes in melasma.
Brianezi, G; Handel, A C; Schmitt, J V; Miot, L D B; Miot, H A
2015-04-01
The pathogenesis of melasma and the role of keratinocytes in disease development and maintenance are not completely understood. Dermal abnormalities, the expression of inflammatory mediators and growth factors, and the epithelial expression of melanocortin and sex hormone receptors suggest that not only melanocytes but the entire epidermal melanin unit is involved in melasma physiopathology. Our aim was to compare nuclear morphological features and chromatin texture between basal keratinocytes in facial melasma and adjacent normal skin. Facial skin biopsies (2 mm, from melasma and adjacent normal skin) were taken from women and processed for haematoxylin and eosin staining. Thirty non-overlapping basal keratinocyte nuclei were segmented, and descriptors of area, largest diameter, perimeter, circularity, pixel intensity, profilometric index (Ra), and fractal dimension were extracted using ImageJ software. Basal keratinocyte nuclei from facial melasma epidermis displayed larger size, more irregular shape, hyperpigmentation, and greater chromatin heterogeneity (by fractal dimension) than perilesional skin. Basal keratinocytes from facial melasma display changes in nuclear form and chromatin texture, suggesting that the phenotypic differences between melasma and adjacent facial skin may result from alterations of the complete epidermal melanin unit, not just hypertrophic melanocytes. © 2014 European Academy of Dermatology and Venereology.
Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya
2016-01-01
A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like “automatic” response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target’s facial expressions depends on whether participants are motivated to infer the target’s emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target’s emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target’s emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target’s emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes. PMID:27055206
Téllez, Maria J; Ulkatan, Sedat; Urriza, Javier; Arranz-Arranz, Beatriz; Deletis, Vedran
2016-02-01
To improve the recognition of, and possibly prevent, confounding peripheral activation of the facial nerve caused by leaking transcranial electrical stimulation (TES) current during corticobulbar tract monitoring. We applied a single stimulus and a short train of electrical stimuli directly to the extracranial portion of the facial nerve. We compared the peripherally elicited compound muscle action potential (CMAP) of the facial nerve with the responses elicited by TES during intraoperative monitoring of the corticobulbar tract. A single stimulus applied directly to the facial nerve at subthreshold intensities did not evoke a CMAP, whereas short trains of subthreshold stimuli repeatedly evoked CMAPs. This is due to the phenomenon of sub- or near-threshold superexcitability of the cranial nerve. Therefore, facial responses evoked by short-train TES, when the leaked current reaches the facial nerve at sub- or near-threshold intensity, could lead to false interpretation. Our results reveal a potential pitfall in the current methodology for facial corticobulbar tract monitoring that is due to activation of the facial nerve by subthreshold trains of stimuli. This study proposes a new criterion to exclude peripheral activation during corticobulbar tract monitoring. Failure to recognize and avoid facial nerve activation due to current leaking to the peripheral portion of the facial nerve during TES decreases the reliability of corticobulbar tract monitoring by increasing the possibility of false interpretation. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Chu, Simon; McNeill, Kimberley; Ireland, Jane L; Qurashi, Inti
2015-12-15
We investigated the relationship between a change in sleep quality and facial emotion recognition accuracy in a group of mentally-disordered inpatients at a secure forensic psychiatric unit. Patients whose sleep improved over time also showed improved facial emotion recognition while patients who showed no sleep improvement showed no change in emotion recognition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Modeling 3D Facial Shape from DNA
Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.
2014-01-01
Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127
Marinelli, John P; Van Gompel, Jamie J; Link, Michael J; Carlson, Matthew L
2018-05-01
Secondary trigeminal neuralgia (TN) is uncommon. When a space-occupying lesion with mass effect is identified, the associated TN is often exclusively attributed to the tumor. This report illustrates the importance of considering coexistent actionable pathology when surgically treating secondary TN. A 51-year-old woman presented with abrupt-onset TN of the V2 and V3 nerve divisions with hypesthesia. She denied changes in hearing, balance, or facial nerve dysfunction. Magnetic resonance imaging revealed a 1.6-cm contrast-enhancing cerebellopontine angle tumor that effaced the trigeminal nerve, consistent with a vestibular schwannoma. In addition, a branch of the superior cerebellar artery abutted the cisternal segment of the trigeminal nerve on T2-weighted thin-slice magnetic resonance imaging. Intraoperative electrical stimulation of the tumor elicited a response from the facial nerve at low threshold over the entire accessible tumor surface, indicating that the tumor was a facial nerve schwannoma. Considering the patient's lack of facial nerve deficit and that the tumor exhibited no safe entry point for intracapsular debulking, tumor resection was not performed. Working between the tumor and tentorium, a branch of the superior cerebellar artery was identified and decompressed with a Teflon pad. At last follow-up, the patient exhibited resolution of her TN. Her hearing and facial nerve function remained intact. Despite obstruction from a medium-sized tumor, it is still possible to achieve microvascular decompression of the fifth cranial nerve. This emphasizes the importance of considering other actionable pathology during surgical management of presumed tumor-induced TN. Further, TN is relatively uncommon with medium-sized vestibular schwannomas and coexistent causes should be considered. Copyright © 2018 Elsevier Inc. All rights reserved.
Imitating expressions: emotion-specific neural substrates in facial mimicry.
Lee, Tien-Wen; Josephs, Oliver; Dolan, Raymond J; Critchley, Hugo D
2006-09-01
Intentionally adopting a discrete emotional facial expression can modulate the subjective feelings corresponding to that emotion; however, the underlying neural mechanism is poorly understood. We therefore used functional brain imaging (functional magnetic resonance imaging) to examine brain activity during intentional mimicry of emotional and non-emotional facial expressions and to relate regional responses to the magnitude of expression-induced facial movement. Eighteen healthy subjects were scanned while imitating video clips depicting three emotional (sad, angry, happy) and two 'ingestive' (chewing and licking) facial expressions. Simultaneously, facial movement was monitored from the displacement of fiducial markers (highly reflective dots) on each subject's face. Imitating emotional expressions enhanced activity within right inferior prefrontal cortex. This pattern was absent during passive viewing conditions. Moreover, the magnitude of facial movement during emotion imitation predicted responses within right insula and motor/premotor cortices. Enhanced activity was observed in ventromedial prefrontal cortex and frontal pole during imitation of anger, in ventromedial prefrontal and rostral anterior cingulate cortex during imitation of sadness, and in striatal, amygdala, and occipitotemporal regions during imitation of happiness. Our findings suggest a central role for right inferior frontal gyrus in the intentional imitation of emotional expressions. Further, by entering metrics of facial muscular change into the analysis of brain imaging data, we highlight shared and discrete neural substrates supporting the affective, action, and social consequences of somatomotor emotional expression.
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We used Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured RGBD data for different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, and kiss. For every raw RGBD record, a set of facial feature points on the color image, such as eye corners, mouth contour, and the nose tip, is automatically localized and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, every person in our database has a much richer matching collection of expressions, enabling the depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
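As a rough illustration of how a bilinear model built from a rank-3 (vertices x identities x expressions) tensor can be used, the Python sketch below contracts a core tensor with identity and expression weight vectors to synthesize a face. The random core tensor and weight vectors are placeholders; FaceWarehouse itself derives a reduced core via higher-order SVD, which is omitted here.

import numpy as np

# Placeholder core tensor of size (3V, n_id, n_expr); the first mode stacks
# the x, y, z coordinates of V mesh vertices.  Random values stand in for a
# core obtained from real data.
V, n_id, n_expr = 1000, 50, 20
core = np.random.randn(3 * V, n_id, n_expr)

def synthesize_face(core, w_id, w_expr):
    """Contract the core tensor with identity and expression weights:
    face[k] = sum_i sum_j core[k, i, j] * w_id[i] * w_expr[j]
    """
    return np.einsum("kij,i,j->k", core, w_id, w_expr)

w_id = np.random.randn(n_id)      # identity coefficients (illustrative)
w_expr = np.random.randn(n_expr)  # expression coefficients (illustrative)
face_vertices = synthesize_face(core, w_id, w_expr).reshape(V, 3)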
Voluntary facial action generates emotion-specific autonomic nervous system activity.
Levenson, R W; Ekman, P; Friesen, W V
1990-07-01
Four experiments were conducted to determine whether voluntarily produced emotional facial configurations are associated with differentiated patterns of autonomic activity, and if so, how this might be mediated. Subjects received muscle-by-muscle instructions and coaching to produce facial configurations for anger, disgust, fear, happiness, sadness, and surprise while heart rate, skin conductance, finger temperature, and somatic activity were monitored. Results indicated that voluntary facial activity produced significant levels of subjective experience of the associated emotion, and that autonomic distinctions among emotions: (a) were found both between negative and positive emotions and among negative emotions, (b) were consistent between group and individual subjects' data, (c) were found in both male and female subjects, (d) were found in both specialized (actors, scientists) and nonspecialized populations, (e) were stronger when the voluntary facial configurations most closely resembled actual emotional expressions, and (f) were stronger when experience of the associated emotion was reported. The capacity of voluntary facial activity to generate emotion-specific autonomic activity: (a) did not require subjects to see facial expressions (either in a mirror or on an experimenter's face), and (b) could not be explained by differences in the difficulty of making the expressions or by differences in concomitant somatic activity.
Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.
Matsumoto, David; Willingham, Bob
2009-01-01
The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from one context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either at the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these differences were limited to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at three different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning, but they simultaneously demonstrate a learned component to the social management of expressions, even among blind individuals.
Optimising ballistic facial coverage from military fragmenting munitions: a consensus statement.
Breeze, J; Tong, D C; Powers, D; Martin, N A; Monaghan, A M; Evriviades, D; Combes, J; Lawton, G; Taylor, C; Kay, A; Baden, J; Reed, B; MacKenzie, N; Gibbons, A J; Heppell, S; Rickard, R F
2017-02-01
VIRTUS is the first United Kingdom (UK) military personal armour system to provide components capable of protecting the whole face from low-velocity ballistic projectiles. Protection is modular, using a helmet worn with ballistic eyewear, a visor, and a mandibular guard. When all four components are worn together the face is completely covered, but the heat, discomfort, and weight may not be optimal in all types of combat. We organized a Delphi consensus group analysis with 29 military consultant surgeons from the UK, United States, Canada, Australia, and New Zealand to identify a potential hierarchy of functional facial units, in order of importance, that require protection. We identified the causes of the facial injuries that are hardest to reconstruct and the most effective combinations of facial protection. Protection is required from both penetrating projectiles and burns. There was strong consensus that blunt injury to the facial skeleton is currently not a military priority. The functional units that should be prioritised are the eyes and eyelids, followed consecutively by the nose, lips, and ears. Twenty-nine respondents felt that the visor was more important than the mandibular guard if only one piece was to be worn. Essential cover of the brain and eyes is achieved from all directions using a combination of helmet and visor. Nasal cover currently requires the mandibular guard unless the visor can be modified to cover the nose as well. Any such prototype would need extensive ergonomic and integration assessment, as any changes would have to be acceptable to the people who wear them in the long term. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Supplemental oxygen: ensuring its safe delivery during facial surgery.
Reyes, R J; Smith, A A; Mascaro, J R; Windle, B H
1995-04-01
Electrosurgical coagulation in the presence of blow-by oxygen is a potential source of fire in facial surgery. A case report of a patient who sustained partial-thickness facial burns secondary to such a flash fire is presented. A fiberglass facial model was then used to study the variables involved in providing supplemental oxygen when an electrosurgical unit is employed. Oxygen flow, oxygen delivery systems, distance from the oxygen source, and coagulation current levels were varied. A nasal cannula and an adapted suction tubing provided the oxygen delivery systems on the model. Both the "displaced" nasal cannula and the adapted suction tubing ignited at a minimum coagulation level of 30 W, an oxygen flow of 2 liters/minute, and a linear distance of 5 cm from the oxygen source. The properly placed nasal cannula did not ignite at any combination of oxygen flow, coagulation current level, or distance from the oxygen source. Facial cutaneous surgery in patients receiving supplemental oxygen should be practiced with caution when an electrosurgical unit is used for coagulation. The oxygen delivery systems adapted for such use are hazardous and should not be used until their safety has been demonstrated.
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
2016-01-01
Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. Our aim was to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the Facial Action Coding System. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). These results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
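As a small illustration of the kind of nonparametric correlation mentioned above, the Python sketch below computes a Spearman correlation between one hypothetical kinematic variable and an Ekman-style global score; all values and variable names are made up for illustration and are not the study's data.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values (not the study's data).
peak_velocity = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0])  # e.g. smile peak velocity
ekman_global  = np.array([28, 25, 30, 22, 27, 29, 21, 26])           # emotion recognition score

rho, p_value = spearmanr(peak_velocity, ekman_global)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")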
[Screening for psychiatric risk factors in facial trauma patients: validation of a questionnaire].
Foletti, J M; Bruneau, S; Farisse, J; Thiery, G; Chossegros, C; Guyot, L
2014-12-01
We recorded similarities between patients managed in the psychiatry department and in the maxillo-facial surgery unit. Our hypothesis was that some psychiatric conditions act as risk factors for facial trauma. Our aims were to test this hypothesis and to validate a simple and efficient questionnaire to identify these psychiatric disorders. Fifty-eight consenting patients with facial trauma, recruited prospectively in the 3 maxillo-facial surgery departments of the Marseille area over 3 months (December 2012-March 2013), completed a self-questionnaire based on the French versions of 3 validated screening tests (Self Reported Psychopathy test, Rapid Alcohol Problem Screening test quantity-frequency, and Personal Health Questionnaire). This preliminary study confirmed that the psychiatric conditions detected by our questionnaire, namely alcohol abuse and dependence, substance abuse, and depression, were risk factors for facial trauma. Maxillo-facial surgeons are often unaware of psychiatric disorders that may be the cause of facial trauma. The self-screening test we propose allows documenting the psychiatric history of patients and implementing earlier psychiatric care. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Overview of Facial Plastic Surgery and Current Developments
Chuang, Jessica; Barnes, Christian; Wong, Brian J. F.
2016-01-01
Facial plastic surgery is a multidisciplinary specialty largely driven by otolaryngology but includes oral maxillary surgery, dermatology, ophthalmology, and plastic surgery. It encompasses both reconstructive and cosmetic components. The scope of practice for facial plastic surgeons in the United States may include rhinoplasty, browlifts, blepharoplasty, facelifts, microvascular reconstruction of the head and neck, craniomaxillofacial trauma reconstruction, and correction of defects in the face after skin cancer resection. Facial plastic surgery also encompasses the use of injectable fillers, neural modulators (e.g., BOTOX Cosmetic, Allergan Pharmaceuticals, Westport, Ireland), lasers, and other devices aimed at rejuvenating skin. Facial plastic surgery is a constantly evolving field with continuing innovative advances in surgical techniques and cosmetic adjunctive technologies. This article aims to give an overview of the various procedures that encompass the field of facial plastic surgery and to highlight the recent advances and trends in procedures and surgical techniques. PMID:28824978
Foroni, Francesco; Semin, Gün R
2009-08-01
Observing and producing a smile activate the very same facial muscles. In Experiment 1, we predicted and found that verbal stimuli (action verbs) that refer to emotional expressions elicit the same facial muscle activity (facial electromyography) as visual stimuli do. These results are evidence that language referring to facial muscular activity is not amodal, as traditionally assumed, but is instead bodily grounded. These findings were extended in Experiment 2, in which subliminally presented verbal stimuli were shown to drive muscle activation and to shape judgments, but not when muscle activation was blocked. These experiments provide an important bridge between research on the neurobiological basis of language and related behavioral research. The implications of these findings for theories of language and other domains of cognitive psychology (e.g., priming) are discussed.
Domaneschi, Filippo; Passarelli, Marcello; Chiorri, Carlo
2017-08-01
Language scientists have broadly addressed the problem of explaining how language users recognize the kind of speech act performed by a speaker uttering a sentence in a particular context. They have done so by investigating the role played by the illocutionary force indicating devices (IFIDs), i.e., all linguistic elements that indicate the illocutionary force of an utterance. The present work takes a first step in the direction of an experimental investigation of non-verbal IFIDs because it investigates the role played by facial expressions and, in particular, of upper-face action units (AUs) in the comprehension of three basic types of illocutionary force: assertions, questions, and orders. The results from a pilot experiment on production and two comprehension experiments showed that (1) certain upper-face AUs seem to constitute non-verbal signals that contribute to the understanding of the illocutionary force of questions and orders; (2) assertions are not expected to be marked by any upper-face AU; (3) some upper-face AUs can be associated, with different degrees of compatibility, with both questions and orders.
Splash Safety During Dermatologic Procedures Among US Dermatology Residents.
Korta, Dorota Z; Chapman, Lance W; Lee, Patrick K; Linden, Kenneth G
2017-07-01
Dermatologists are at potential risk of acquiring infections from contamination of the mucous membranes by blood and body fluids. However, there are few data on splash safety during procedural dermatology. Our objectives were to determine dermatology residents' perceptions of splash risk during dermatologic procedures and to quantify the rate of protective equipment use. An anonymous online survey was sent to 108 United States ACGME-approved dermatology residency programs assessing the frequency of facial protection during dermatologic procedures, personal history of splash injury, and, if applicable, reasons for not always wearing facial protection. A total of 153 dermatology residents responded. Rates of facial protection varied by procedure, with the highest rates during surgery and the lowest during local anesthetic injection. Over 54% of respondents reported suffering a facial splash while not wearing facial protection during a procedure. In contrast, 88.9% of respondents correctly answered that there is a small risk of acquiring infection from a mucosal splash. Residency program recommendations for facial protection seem to vary by procedure. These results demonstrate that although facial splash is a common injury, facial protection rates and protective recommendations vary significantly by procedure. The data support the recommendation for enhanced facial protection guidelines during procedural dermatology.
An Argument for the Use of Biometrics to Prevent Terrorist Access to the United States
2003-12-06
that they are who they claim to be. Remote methods such as facial recognition do not rely on interaction with the individual, and can be used with or...quickly, although there is a relatively high error rate. Acsys Biometric Systems, a leader in facial recognition, reports their best system has only a...change their appearance. The facial recognition system also presents a privacy concern in the minds of many individuals. By remotely scanning without an
Manikandan, N
2007-04-01
To determine the effect of facial neuromuscular re-education over conventional therapeutic measures in improving facial symmetry in patients with Bell's palsy. Randomized controlled trial. Neurorehabilitation unit. Fifty-nine patients diagnosed with Bell's palsy were included in the study after they met the inclusion criteria. Patients were randomly divided into two groups: control (n = 30) and experimental (n = 29). Control group patients received conventional therapeutic measures, while the facial neuromuscular re-education group received techniques tailored to each patient in three sessions per day, six days per week, for a period of two weeks. All patients were evaluated using a Facial Grading Scale before treatment and after three months. The Facial Grading Scale scores showed significant improvement in both the control (mean 32 (range 9.7-54) to 54.5 (42.2-71.7)) and the experimental (33 (18-43.5) to 66 (54-76.7)) groups. Facial Grading Scale change scores showed that the experimental group (27.5 (20-43.77)) improved significantly more than the control group (16.5 (12.2-24.7)). Analysis of the Facial Grading Scale subcomponents did not show statistical significance, except for the movement score (12 (8-16) to 24 (12-18)). Individualized facial neuromuscular re-education is more effective in improving facial symmetry in patients with Bell's palsy than conventional therapeutic measures.
Al Taki, Amjad; Guidoum, Amina
2014-01-01
Objectives: The objective of this study was to assess differences in facial profile preference among different groups of people in the United Arab Emirates. Facial profile self-awareness among the different groups was also evaluated. Materials and Methods: The total sample of 222 participants (mean [standard deviation] age = 25.71 [8.3] years; almost 80% of Arab origin; 55% male) consisted of 60 laypersons, 60 dental students, 60 general practitioners, 16 oral surgeons, and 26 orthodontists. Facial profile photographs of a male and a female adult with straight profiles and a Class I skeletal relationship were used as baseline templates. Computerized photographic image modification was carried out on the templates to obtain seven different facial profile silhouettes for each gender. To assess differences in facial profile perception, participants were asked to rank the profiles of each gender from most to least attractive (1 = highest score, 7 = lowest score). Awareness of and satisfaction with facial appearance in profile view were assessed using questionnaires completed by the non-expert groups. Results: The straight facial profile was perceived to be highly attractive by all five groups. The least attractive profiles were bimaxillary protrusion for the male profiles and mandibular retrusion for the female profiles. Lip protrusion was more esthetically acceptable in females. Significant differences in perception existed among groups. Esthetic perception of the female profiles was highly correlated between the expert groups (P > 0.05). Overall agreement between the non-expert groups' perceptions of their own profiles and evaluation by the expert orthodontist was 51% (κ = 0.089). Candidates who perceived themselves as having a Class III facial profile were the least satisfied with their profile. Conclusions: Dental professionals, dental students, and laypersons showed similar trends in female and male aesthetic preferences. Laypersons were more tolerant of profiles with bimaxillary retrusion. The expert groups' esthetic perceptions were highly correlated only for the female profiles. Most of the non-experts were unable to correctly identify their own facial profile. PMID:24987664
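As an aside on the agreement statistic quoted above, raw percent agreement and Cohen's kappa can diverge sharply once chance agreement is taken into account. The short Python sketch below illustrates the two quantities on entirely hypothetical self-rated versus expert-rated profile classes; it does not use or reproduce the study's data.

from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings only (1 = Class I, 2 = Class II, 3 = Class III);
# these values are illustrative and not taken from the study.
self_rated   = [1, 1, 2, 3, 1, 2, 2, 1, 3, 1]
expert_rated = [1, 2, 2, 1, 1, 3, 2, 2, 1, 1]

raw_agreement = sum(a == b for a, b in zip(self_rated, expert_rated)) / len(self_rated)
kappa = cohen_kappa_score(self_rated, expert_rated)
print(f"raw agreement = {raw_agreement:.0%}, Cohen's kappa = {kappa:.2f}")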
Incidence of facial clefts in Cambridge, United Kingdom.
Bister, Dirk; Set, Patricia; Cash, Charlotte; Coleman, Nicholas; Fanshawe, Thomas
2011-08-01
The aim of this study was to determine the incidence of facial clefting in Cambridge, UK, using multiple sources of ascertainment, and to relate the findings to antenatal ultrasound screening (AUS) detection rates. AUS records from an obstetric ultrasound department, post-natal records from the regional craniofacial unit, and autopsy reports of foetuses over 16 weeks' gestational age from a regional pathology department from 1993 to 1997 were retrospectively reviewed. Cross-referencing between the three data sets identified all cases of facial clefts. Of 23,577 live births and stillbirths, 30 had facial clefts; AUS detected 17 of these. Sixteen of the 30 had isolated facial clefts; the others had associated anomalies, chromosomal defects, or syndromes. Percentages and confidence intervals were calculated from these data. Twenty-one cases resulted in live births, seven in terminations, and two in foetal deaths. The overall detection rate by AUS was 65 percent (67 percent for isolated cleft lip, 93 percent for cleft lip and palate (CLP), and 22 percent for isolated cleft palate), with no false positives. The incidence of facial clefts was 0.127 percent (95 percent confidence interval 0.089-0.182 percent); the incidence of isolated CLP was lower than previously reported: 0.067 percent (0.042-0.110 percent). With one exception, all terminations were in foetuses with multiple anomalies. The figures presented will enable joint CLP clinics to give parents information on termination rates. The study allows pre-pregnancy counselling of families previously affected by clefting about the reliability of AUS detection rates.
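As a quick illustrative check of the incidence figure above, the point estimate and an exact binomial confidence interval can be computed as in the minimal Python sketch below; the exact CI method the authors used is an assumption, so the bounds may differ slightly from those reported.

from statsmodels.stats.proportion import proportion_confint

clefts, births = 30, 23_577
incidence = clefts / births  # ~0.00127, i.e. about 0.127 percent
low, high = proportion_confint(clefts, births, alpha=0.05, method="beta")  # Clopper-Pearson interval
print(f"incidence = {100 * incidence:.3f}% (95% CI {100 * low:.3f}% to {100 * high:.3f}%)")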
Barret, Juan P
2014-01-01
The innovation of composite vascularized allotransplantation has provided plastic and reconstructive surgeons with the ultimate tool for patients who present with facial deformities that cannot be reconstructed with classical or more traditional techniques. Transplanting normal tissues allows for truly restorative surgery. Initial experiences involved the substitution of missing anatomy, whereas after the world's first full-face transplant, performed in Barcelona in March 2010, a true ablative surgery with total restoration proved to be effective. We review the world experience and the performance of our restorative protocol to depict this change in the reconstructive paradigm of facial transplantation. Facial transplants should be performed after a careful analysis of the defect, with a comprehensive ablation plan that follows esthetic units, sacrifices all required tissues, and focuses on global restoration of anatomy, aesthetics, and function while respecting normally functioning muscles. Nowadays, facial transplants following strict esthetic units should restore disfigurement extending to small central areas, whereas major defects may require total ablation and restoration with a full-face transplant. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Clinical feasibility test on a minimally invasive laser therapy system in microsurgery of nerves.
Mack, K F; Leinung, M; Stieve, M; Lenarz, T; Schwab, B
2008-01-01
The clinical feasibility test described here evaluates the basis for a laser therapy system that enables tumour tissue to be separated from nerves in a minimally invasive manner. It was first investigated whether, using an Er:YAG laser, laser-induced nerve (specifically, facial nerve) responses in the rabbit in vivo can be reliably detected with the hitherto standard monitoring techniques. Peripherally recordable neuromuscular signals (i.e. compound action potentials, CAPs) were used to monitor nerve function and to establish a feedback loop. The first occurrence of laser-evoked CAPs was taken as the criterion for deciding when to switch off the laser. When drawing up criteria governing the control and termination of the laser application, the priority was the maintenance of nerve function. Five needle-electrode arrays specially developed for this purpose, each with a miniature preamplifier, were then placed into the facial musculature instead of single-needle electrodes. The system was tested in vivo under realistic surgical conditions (i.e. facial-nerve surgery in the rabbit). This modified multi-channel electromyography (EMG) system enabled laser-evoked CAPs to be detected that have amplitudes 10 times smaller than those picked up by commercially available systems. This optimization, and the connection of the neuromuscular unit with the Er:YAG laser via the electrode array to create a feedback loop, were designed to make it possible to maintain online control of the laser ablation process in the vicinity of neuronal tissue, thus ensuring that tissue excision is both reliable and does not affect function. Our results open up new possibilities in minimally invasive surgery near neural structures.
Intact Imitation of Emotional Facial Actions in Autism Spectrum Conditions
ERIC Educational Resources Information Center
Press, Clare; Richardson, Daniel; Bird, Geoffrey
2010-01-01
It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired.…
Real-time face and gesture analysis for human-robot interaction
NASA Astrophysics Data System (ADS)
Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd
2010-05-01
Human communication relies on a large number of different communication mechanisms such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Using this model, different kinds of information are extracted from the image data and then handed over to a real-time capable data-transfer framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and face-related features, low-level image features of the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed with different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of the classification processes are again handed over to the RTDB, where other processes (such as a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
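To make the classification scheme described above concrete, the sketch below shows the generic "one HMM per gesture class, pick the class with the highest log-likelihood" pattern in Python, using hmmlearn as a stand-in library. The data structures and parameter choices are assumptions for illustration, not the authors' implementation, and feature extraction (optical flow, Hu moments) and the RTDB are outside the sketch.

import numpy as np
from hmmlearn import hmm  # stand-in HMM library, assumed for illustration

def train_gesture_models(training_data, n_states=5):
    """Fit one Gaussian HMM per gesture class.

    training_data: dict mapping a gesture label to a list of observation
    sequences, each an array of shape (T_i, n_features), e.g. per-frame
    optical-flow / Hu-moment features.
    """
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                 # concatenate all sequences
        lengths = [len(s) for s in sequences]    # remember sequence boundaries
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_gesture(models, sequence):
    """Return the gesture label whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))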
van Dinther, J J S; Van Rompaey, V; Somers, T; Zarowski, A; Offeciers, F E
2011-01-01
To assess the prognostic significance of pre-operative electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery. Retrospective study design in a tertiary referral neurology unit. We studied a total of 123 patients with unilateral vestibular schwannoma who underwent microsurgical removal of the lesion. Nine patients were excluded because they had clinically abnormal pre-operative facial function. Pre-operative electrophysiological facial nerve function testing (EPhT) was performed. Short-term (1 month) and long-term (1 year) post-operative clinical facial nerve function was assessed. When pre-operative facial nerve function, as evaluated by EPhT, was normal, the clinical outcome at 1 month post-operatively was excellent (HB I-II) in 78% of patients, moderate (HB III-IV) in 11%, and poor (HB V-VI) in 11%. After 1 year, 86% had excellent outcomes, 13% moderate outcomes, and 1% poor outcomes. Of all patients with normal clinical facial nerve function, 22% had an abnormal EPhT result and 78% had a normal result. No statistically significant differences could be observed in short-term or long-term post-operative facial function between the groups. In this study, electrophysiological tests were not able to predict facial nerve outcome after vestibular schwannoma surgery. Tumour size remains the best pre-operative prognostic indicator of facial nerve function outcome, i.e. the outcome is better for smaller lesions.
Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin
2017-02-01
It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent; however, they have mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD, that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDC but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude
2015-01-01
“Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
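For readers interested in the kind of model implied by the analysis above, the following minimal Python sketch fits an ordinary least squares regression of the proportion of neutral expressions on PTSD symptom score while controlling for age, sex, and baseline expression. Column names and values are hypothetical placeholders, not the study's data.

import pandas as pd
import statsmodels.formula.api as smf

# Entirely hypothetical data frame; one row per child, column names assumed.
df = pd.DataFrame({
    "neutral_prop":     [0.42, 0.55, 0.31, 0.60, 0.48, 0.39, 0.57, 0.45],  # FaceReader output during the comedy clip
    "ptsd_score":       [3, 7, 1, 9, 5, 2, 8, 4],
    "age":              [4, 5, 4, 6, 5, 4, 6, 5],
    "sex":              ["f", "m", "f", "m", "f", "m", "f", "m"],
    "baseline_neutral": [0.50, 0.52, 0.40, 0.58, 0.49, 0.44, 0.55, 0.47],
})

# Proportion of neutral expressions regressed on PTSD symptoms,
# controlling for age, sex, and baseline neutral expression.
model = smf.ols("neutral_prop ~ ptsd_score + age + C(sex) + baseline_neutral", data=df).fit()
print(model.params)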
Guillot, P
2013-01-01
A solid understanding of anatomy, basic surgical principles, and tissue movement is essential when undertaking the reconstruction of facial cutaneous surgical defects. Aesthetic facial reconstruction requires the ability to use the tissue adjacent to the defect to create a reconstruction that preserves the function of the area and respects the cosmetic facial units and subunits. Closure of uninterrupted white upper lip defects with a combined advancement and rotation flap is preferred for defects not exceeding 2.5 cm in diameter.
Cartaud, Alice; Ruggiero, Gennaro; Ott, Laurent; Iachini, Tina; Coello, Yann
2018-01-01
Accurate control of interpersonal distances in social contexts is an important determinant of effective social interactions. Although comfortable interpersonal distance seems to be dependent on social factors such as the gender, age and activity of the confederates, it also seems to be modulated by the way we represent our peripersonal-action space. To test this hypothesis, the present study investigated the relation between the emotional responses registered through electrodermal activity (EDA) triggered by human-like point-light displays (PLDs) carrying different facial expressions (neutral, angry, happy) when located in the participants' peripersonal or extrapersonal space, and the comfort distance with the same PLDs when approaching and crossing the participants' fronto-parallel axis on the right or left side. The results show an increase in the phasic EDA for PLDs with angry facial expressions located in the peripersonal space (reachability judgment task), in comparison to the same PLDs located in the extrapersonal space, which was not observed for PLDs with neutral or happy facial expressions. The results also show an increase in the comfort distance for PLDs approaching the participants with an angry facial expression (interpersonal comfort distance judgment task), in comparison to PLDs with happy and neutral ones, which was related to the increase of the physiological response. Overall, the findings indicate that comfort social space can be predicted from the emotional reaction triggered by a confederate when located within the observer's peripersonal space. This suggests that peripersonal-action space and interpersonal-social space are similarly sensitive to the emotional valence of the confederate, which could reflect a common adaptive mechanism in specifying these spaces to subtend interactions with both the physical and social environment, but also to ensure body protection from potential threats.
Accurately Assessing Lines on the Aging Face.
Renton, Kim; Keefe, Kathy Young
The ongoing positive aging trend has resulted in many research studies being conducted to determine the characteristics of aging and what steps we can take to prevent the extrinsic signs of aging. Much of this attention has been focused on the prevention and treatment of facial wrinkles. To treat or prevent facial wrinkles correctly, their causative action first needs to be determined. Compelling published evidence indicates that the development of wrinkles is complex and is caused by more factors than just a combination of poor lifestyle choices.
USDA-ARS's Scientific Manuscript database
We describe females and males of Osmia (Melanosmia) calaminthae, new species, an apparent floral specialist on Calamintha ashei (Lamiaceae). Females of O. calaminthae have short, erect, simple facial hairs. The species is currently only known from sandy scrub at a number of sites in the southern La...
Quantitative facial electromyography monitoring after hypoglossal‐facial jump nerve suture
Flasar, Jan; Volk, Gerd Fabian; Granitzka, Thordis; Geißler, Katharina; Irintchev, Andrey; Lehmann, Thomas
2017-01-01
Objectives/Hypothesis: The time course of reinnervation of the paralyzed face after hypoglossal-facial jump nerve suture was assessed using electromyography (EMG), and its relation to the clinical outcome was analyzed. Study Design: Retrospective single-center cohort study. Methods: Reestablishment of motor units was studied by quantitative EMG and motor unit potential (MUP) analysis in 11 patients after hypoglossal-facial jump nerve suture. Functional recovery was evaluated using the Stennert index (0 = normal; 10 = maximal palsy). Results: Clinically, first movements were seen between 6 and >10 months after surgery in individual patients. Maximal improvement was achieved at 18 months. The Stennert index decreased from 7.9 ± 2.0 preoperatively to a final postoperative score of 5.8 ± 2.4. EMG monitoring performed for 2.8 to 60 months after surgery revealed that pathological spontaneous activity disappeared within 2 weeks. MUPs were first recorded after the 2nd month and were present in all 11 patients 8–10 months post-surgery. Polyphasic regeneration potentials first appeared at 4–10 months post-surgery. The MUP amplitudes increased between the 3rd and 15th months after surgery to values of control muscles. The MUP duration was significantly increased above normal values between the 3rd and 24th months after surgery. Conclusion: Reinnervation can be detected at least 2 months earlier by EMG than by clinical evaluation. Changes should be followed for at least 18 months to assess outcome. EMG changes reflected the remodeling of motor units due to axonal regeneration and collateral sprouting by hypoglossal nerve fibers into the reinnervated facial muscle fibers. Level of Evidence: 3b. PMID:29094077
Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.
Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail
2015-02-01
Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumor lesion, and suspected chemotherapy toxicity. We noted that younger patients had generally poorer outcome than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.
Bilateral traumatic facial paralysis. Case report.
Undabeitia, Jose; Liu, Brian; Pendleton, Courtney; Nogues, Pere; Noboa, Roberto; Undabeitia, Jose Ignacio
2013-01-01
Although traumatic injury of the facial nerve is a relatively common condition in neurosurgical practice, bilateral lesions related to fracture of the temporal bones are seldom seen. We report the case of a 38-year-old patient admitted to the Intensive Care Unit after severe head trauma requiring ventilatory support (Glasgow Coma Scale of 7 on admission). A computed tomography (CT) scan confirmed a longitudinal fracture of the right temporal bone and a transverse fracture of the left. After successful weaning from the respirator, bilateral facial paralysis was observed. The possible aetiologies of facial diplegia differ from those of unilateral injury. Due to the lack of facial asymmetry, it can be easily missed in critically ill patients, and both high-resolution CT scans and electromyographic studies can be helpful for correct diagnosis. Copyright © 2012 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
Skilful communication: Emotional facial expressions recognition in very old adults.
María Sarabia-Cobo, Carmen; Navas, María José; Ellgring, Heiner; García-Rodríguez, Beatriz
2016-02-01
The main objective of this study was to assess the changes associated with ageing in the ability to identify emotional facial expressions and to what extent such age-related changes depend on the intensity with which each basic emotion is manifested. A randomised controlled trial was carried out on 107 subjects who performed a six-alternative forced-choice emotional expression identification task. The stimuli consisted of 270 virtual emotional faces expressing the six basic emotions (happiness, sadness, surprise, fear, anger and disgust) at three different levels of intensity (low, pronounced and maximum). The virtual faces were generated by facial surface changes, as described in the Facial Action Coding System (FACS). A progressive age-related decline in the ability to identify emotional facial expressions was detected. The ability to recognise the intensity of expressions was one of the variables most strongly impaired with age, and the valence of emotion was also poorly identified, particularly in terms of recognising negative emotions. Nurses should be mindful of how ageing affects communication with older patients. In this study, very old adults displayed more difficulties in identifying emotional facial expressions, especially low-intensity expressions and those associated with difficult emotions like disgust or fear. Copyright © 2015 Elsevier Ltd. All rights reserved.
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
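As an illustration of how automatically estimated action units can be related to categorical judgments like those reported above, the sketch below fits a cross-validated logistic regression predicting happy vs. non-happy labels from AU intensities. It is a hedged sketch with assumed inputs, not FACET's algorithm or the authors' analysis.

```python
# Hedged sketch: predicting happy/non-happy categorization from AU intensities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def au_discrimination_accuracy(au_intensities: np.ndarray,
                               is_happy: np.ndarray) -> float:
    """au_intensities: (n_faces, n_aus) array, e.g. 20 AUs per face (assumed).
    is_happy: binary labels from human or automated categorization."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Cross-validated accuracy as a rough analogue of discrimination sensitivity.
    return cross_val_score(clf, au_intensities, is_happy, cv=5).mean()
```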
Kret, Mariska E.
2015-01-01
Humans are well adapted to quickly recognize and adequately respond to another’s emotions. Different theories propose that mimicry of emotional expressions (facial or otherwise) mechanistically underlies, or at least facilitates, these swift adaptive reactions. When people unconsciously mimic their interaction partner’s expressions of emotion, they come to feel reflections of those companions’ emotions, which in turn influence the observer’s own emotional and empathic behavior. The majority of research has focused on facial actions as expressions of emotion. However, the fact that emotions are not expressed by the facial muscles alone is still often ignored in emotion perception research. In this article, I therefore argue for a broader exploration of emotion signals from sources beyond the facial muscles that are more automatic and difficult to control. Specifically, I focus on the perception of implicit sources such as gaze and tears, and of autonomic responses such as pupil dilation, eyeblinks and blushing, which are subtle yet visible to observers and, because they can hardly be controlled or regulated by the sender, provide important “veridical” information. Recently, more research has emerged on the mimicry of these subtle affective signals, including pupil mimicry. I review this literature and suggest avenues for future research that will eventually lead to a better comprehension of how these signals help us make social judgments and understand each other’s emotions. PMID:26074855
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variation in facial feature points exists even for similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. The expression of the input facial image is determined as the one whose SVM output is at a minimum, which substantially enhances recognition accuracy. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
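The sketch below illustrates the score-level fusion idea under stated assumptions: an SVM learns "same vs. different expression" from the pair of shape- and appearance-based matching scores, and a probe image is assigned to the expression class whose comparison yields the most "same"-like decision value. It is not the authors' implementation; feature extraction (AAM points, FACS-based ratios, appearance matching) is assumed to happen upstream, and all names are illustrative.

```python
# Illustrative sketch of SVM-based fusion of shape and appearance matching scores.
import numpy as np
from sklearn.svm import SVC

def train_fusion_svm(shape_scores, appearance_scores, is_different):
    """Learn 'same vs. different expression' (is_different: 1/0) from pairs of
    shape-based and appearance-based matching scores."""
    X = np.column_stack([shape_scores, appearance_scores])
    return SVC(kernel="rbf").fit(X, is_different)

def classify_expression(fusion_svm, probe_scores_per_class):
    """probe_scores_per_class: {expression: (shape_score, appearance_score)}
    comparing the input face with a reference of each expression. The class
    with the lowest decision value (most 'same'-like) is returned."""
    decisions = {c: fusion_svm.decision_function(np.array([s]))[0]
                 for c, s in probe_scores_per_class.items()}
    return min(decisions, key=decisions.get)
```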
Registered Replication Report: Strack, Martin, & Stepper (1988).
Acosta, Alberto; Adams, Reginald B; Albohn, Daniel N; Allard, Eric S; Beek, Titia; Benning, Stephen D; Blouin-Hudon, Eve-Marie; Bulnes, Luis Carlo; Caldwell, Tracy L; Calin-Jageman, Robert J; Capaldi, Colin A; Carfagno, Nicholas S; Chasten, Kelsie T; Cleeremans, Axel; Connell, Louise; DeCicco, Jennifer M.; Dijkhoff, Laura; Dijkstra, Katinka; Fischer, Agneta H; Foroni, Francesco; Gronau, Quentin F; Hess, Ursula; Holmes, Kevin J; Jones, Jacob L H; Klein, Olivier; Koch, Christopher; Korb, Sebastian; Lewinski, Peter; Liao, Julia D; Lund, Sophie; Lupiáñez, Juan; Lynott, Dermot; Nance, Christin N; Oosterwijk, Suzanne; Özdoğru, Asil Ali; Pacheco-Unguetti, Antonia Pilar; Pearson, Bethany; Powis, Christina; Riding, Sarah; Roberts, Tomi-Ann; Rumiati, Raffaella I; Senden, Morgane; Shea-Shumsky, Noah B; Sobocko, Karin; Soto, Jose A; Steiner, Troy G; Talarico, Jennifer M; vanAllen, Zack M; Wagenmakers, E-J; Vandekerckhove, Marie; Wainwright, Bethany; Wayand, Joseph F; Zeelenberg, Rene; Zetzer, Emily E; Zwaan, Rolf A
2016-11-01
According to the facial feedback hypothesis, people's affective responses can be influenced by their own facial expression (e.g., smiling, pouting), even when their expression did not result from their emotional experiences. For example, Strack, Martin, and Stepper (1988) instructed participants to rate the funniness of cartoons using a pen that they held in their mouth. In line with the facial feedback hypothesis, when participants held the pen with their teeth (inducing a "smile"), they rated the cartoons as funnier than when they held the pen with their lips (inducing a "pout"). This seminal study of the facial feedback hypothesis has not been replicated directly. This Registered Replication Report describes the results of 17 independent direct replications of Study 1 from Strack et al. (1988), all of which followed the same vetted protocol. A meta-analysis of these studies examined the difference in funniness ratings between the "smile" and "pout" conditions. The original Strack et al. (1988) study reported a rating difference of 0.82 units on a 10-point Likert scale. Our meta-analysis revealed a rating difference of 0.03 units with a 95% confidence interval ranging from -0.11 to 0.16. © The Author(s) 2016.
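For readers unfamiliar with how such replication estimates are pooled, the sketch below shows a generic fixed-effect, inverse-variance meta-analysis of per-lab mean differences with a 95% confidence interval. The inputs (per-lab differences and standard errors) are placeholders; the Registered Replication Report followed its own vetted analysis plan, which this does not reproduce.

```python
# Generic inverse-variance pooling of per-lab mean differences (illustrative).
import numpy as np

def pooled_difference(diffs, ses):
    """diffs: per-lab mean rating differences; ses: their standard errors."""
    diffs, ses = np.asarray(diffs, float), np.asarray(ses, float)
    w = 1.0 / ses**2                       # inverse-variance weights
    est = np.sum(w * diffs) / np.sum(w)    # pooled estimate
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)   # estimate, 95% CI
```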
Infants’ sensitivity to emotion in music and emotion-action understanding
Siu, Tik-Sze Carrey; Cheung, Him
2017-01-01
Emerging evidence has indicated infants’ early sensitivity to acoustic cues in music. Do they interpret these cues in emotional terms to represent others’ affective states? The present study examined infants’ development of emotional understanding of music with a violation-of-expectation paradigm. Twelve- and 20-month-olds were presented with emotionally concordant and discordant music-face displays on alternate trials. The 20-month-olds, but not the 12-month-olds, were surprised by emotional incongruence between musical and facial expressions, suggesting their sensitivity to musical emotion. In a separate non-music task, only the 20-month-olds were able to use an actress’s affective facial displays to predict her subsequent action. Interestingly, for the 20-month-olds, such emotion-action understanding correlated with sensitivity to musical expressions measured in the first task. These two abilities however did not correlate with family income, parental estimation of language and communicative skills, and quality of parent-child interaction. The findings suggest that sensitivity to musical emotion and emotion-action understanding may be supported by a generalised common capacity to represent emotion from social cues, which lays a foundation for later social-communicative development. PMID:28152081
Reconstruction Techniques of Choice for the Facial Cosmetic Units.
Russo, F; Linares, M; Iglesias, M E; Martínez-Amo, J L; Cabo, F; Tercedor, J; Costa-Vieira, R; Toledo-Pastrana, T; Ródenas, J M; Leis, V
2017-10-01
A broad range of skin flaps can be used to repair facial surgical defects after the excision of a tumor. The aim of our study was to develop a practical guideline covering the most useful skin flaps for each of the distinct facial cosmetic units. This was a multicenter study in which 10 dermatologists with extensive experience in reconstructive surgery chose their preferred technique for each cosmetic unit. The choice of flaps was based on personal experience, taking into account factors such as suitability of the reconstruction technique for the specific defect, the final cosmetic result, surgical difficulty, and risk of complications. Each dermatologist proposed 2 flaps in order of preference for each cosmetic subunit. A score of 10 was given to the first flap and a score of 5 to the second. The total score obtained for each of the options proposed by the participating dermatologists was used to draw up a list of the 3 best flaps for each site. There was notable unanimity of criteria among most of the dermatologists for reconstructive techniques such as the glabellar flap for defects of the medial canthus of the eye, the bilateral advancement flap or H flap for the forehead, the rotary door flap for the auricle of the ear, the Mustarde flap for the infraorbital cheek, the O-Z rotation flap for the scalp, the Tenzel flap for the lower eyelid, and the island flap for the upper lip. The results of this study will be useful as a practical guide to choosing the best reconstruction technique for each of the facial cosmetic units. Copyright © 2017 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
[On the contribution of magnets in sequelae of facial paralysis. Preliminary clinical study].
Fombeur, J P; Koubbi, G; Chevalier, A M; Mousset, C
1988-01-01
This trial was designed to evaluate the efficacy of EPOREC 1500 magnets as an adjuvant to rehabilitation following peripheral facial paralysis. Magnetotherapy is used in many other specialties, in particular rheumatology. The property of repulsion between identical poles was used to decrease the effect of contracture-type sequelae on the facial muscles. There were two groups of 20 patients: one group with physiotherapy only and the other with standard rehabilitation together with the use of magnets. These 40 patients had facial paralysis of various origins (trauma, excision of acoustic neuroma, Bell's palsy, etc.). All patients, of course, had an intact nerve. Magnets were introduced at the time contractures developed, so that their efficacy against synkinesias, contractures, and spasticity could be evaluated. Magnets were worn at night for a mean period of six months, and results were assessed in terms of disappearance of eye-mouth synkinesias and normality of facial tone. Improvement and total recovery without sequelae were obtained far more frequently in the group that wore magnets, encouraging us to continue along these lines.
Body size and allometric variation in facial shape in children.
Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt
2018-02-01
Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of to what extent age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.
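To make the size-versus-shape analysis above concrete, here is a hedged sketch of one way to estimate how much facial-shape variation a joint size axis explains: take the first principal component of the four size measures plus age and regress shape coordinates on it. Data layout and names are assumptions; this is not the study's geometric-morphometrics pipeline.

```python
# Hedged sketch: variance in shape explained by a joint size/age axis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

def allometric_r2(size_and_age: np.ndarray, shape_coords: np.ndarray) -> float:
    """size_and_age: (n, 5) array of four size measures plus age (assumed);
    shape_coords: (n, k) Procrustes shape coordinates."""
    z = StandardScaler().fit_transform(size_and_age)
    pc1 = PCA(n_components=1).fit_transform(z)        # joint size axis
    reg = LinearRegression().fit(pc1, shape_coords)
    pred = reg.predict(pc1)
    ss_res = np.sum((shape_coords - pred) ** 2)
    ss_tot = np.sum((shape_coords - shape_coords.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot                       # multivariate R^2
```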
Botulinum toxin to improve lower facial symmetry in facial nerve palsy
Sadiq, S A; Khwaja, S; Saeed, S R
2012-01-01
Introduction: In long-standing facial palsy, muscles on the normal side overcontract, causing difficulty in articulation, eating, drinking, cosmetic embarrassment, and psychological effects as patients lack confidence in public. Methods: We injected botulinum toxin A (BTXA) into the normal contralateral smile muscles to weaken them and restore symmetry to both active and passive movements by neutralising these overacting muscles. Results: A total of 14 patients received BTXA (79% women, median age 47 years, average length of palsy 8 years). They were all difficult cases graded between 2 and 6 (average grade 3 House–Brackmann). All 14 patients reported improved facial symmetry with BTXA (dose altered in some to achieve maximum benefit). The average dose was 30 units but varied from 10 to 80 units. Average time to peak effect was 6 days; average duration of effect was 11 weeks. Three patients had increased drooling (resolved within a few days). Conclusion: The improvement in symmetry was observed by both the patient and the examining doctor. Patients commented on increased confidence and being more likely to allow photographs to be taken of themselves, and families reported improved legibility of speech. Younger patients have more muscle tone than older patients; the effect is more noticeable and the benefit greater for them. BTXA improves symmetry in patients with facial palsy, is simple and acceptable, and provides approximately 4 months of benefit. The site of injection depends on the dynamics of the muscles in each individual patient. PMID:22975654
Nichol, Kathryn; Bigelow, Philip; O'Brien-Pallas, Linda; McGeer, Allison; Manno, Mike; Holness, D Linn
2008-09-01
Communicable respiratory illness is an important cause of morbidity among nurses. One of the key reasons for occupational transmission of this illness is the failure to implement appropriate barrier precautions, particularly facial protection. The objectives of this study were to describe the factors that influence nurses' decisions to use facial protection and to determine their relative importance in predicting compliance. This cross-sectional survey was conducted in 9 units of 2 urban hospitals in which nursing staff regularly use facial protection. A total of 400 self-administered questionnaires were provided to nurses, and 177 were returned (44% response rate). Less than half of respondents reported compliance with the recommended use of facial protection (eye/face protection, respirators, and surgical masks) to prevent occupational transmission of communicable respiratory disease. Multivariate analysis showed 5 factors to be key predictors of nurses' compliance with the recommended use of facial protection. These factors include full-time work status, greater than 5 years tenure as a nurse, at least monthly use of facial protection, a belief that media coverage of infectious diseases impacts risk perception and work practices, and organizational support for health and safety. Strategies and interventions based on these findings should result in enhanced compliance with facial protection and, ultimately, a reduction in occupational transmission of communicable respiratory illness.
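A hedged sketch of the kind of multivariable model implied by the multivariate analysis above: compliance (yes/no) regressed on the five reported predictors via logistic regression. Variable names and coding are illustrative assumptions, not the study's dataset.

```python
# Hedged sketch of a multivariable logistic model of compliance predictors.
import pandas as pd
import statsmodels.formula.api as smf

def fit_compliance_model(df: pd.DataFrame):
    """df columns (assumed): compliant (0/1), full_time (0/1), tenure_gt5 (0/1),
    monthly_use (0/1), media_influences_risk (0/1), org_support (0/1)."""
    model = smf.logit(
        "compliant ~ full_time + tenure_gt5 + monthly_use"
        " + media_influences_risk + org_support",
        data=df,
    ).fit(disp=False)
    return model   # model.params holds log-odds; exponentiate for odds ratios
```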
Grunebaum, Lisa Danielle; Reiter, David
2006-01-01
To determine current practice in the use of perioperative antibiotics among facial plastic surgeons, to determine the extent to which surgeons' preferences are supported by the literature, and to compare patterns of use with nationally supported evidence-based guidelines. A link to a Web site containing a questionnaire on perioperative antibiotic use was e-mailed to more than 1000 facial plastic surgeons in the United States. Responses were archived in a dedicated database and analyzed to determine patterns of use and methods of documenting that use. Current literature was used to develop evidence-based recommendations for perioperative antibiotic use, emphasizing current nationally supported guidelines. Preferences varied significantly for medication used, dosage and regimen, time of first dose relative to incision time, setting in which medication was administered, and procedures for which a perioperative antibiotic was deemed necessary. Surgical site infection in facial plastic surgery can be reduced by better conformance to currently available evidence-based guidelines. We offer specific recommendations that are supported by the current literature.
Spisak, Brian R.; Blaker, Nancy M.; Lefevre, Carmen E.; Moore, Fhionna R.; Krebbers, Kleis F. B.
2014-01-01
Previous research indicates that followers tend to contingently match particular leader qualities to evolutionarily consistent situations requiring collective action (i.e., context-specific cognitive leadership prototypes) and information processing undergoes categorization which ranks certain qualities as first-order context-general and others as second-order context-specific. To further investigate this contingent categorization phenomenon we examined the “attractiveness halo”—a first-order facial cue which significantly biases leadership preferences. While controlling for facial attractiveness, we independently manipulated the underlying facial cues of health and intelligence and then primed participants with four distinct organizational dynamics requiring leadership (i.e., competition vs. cooperation between groups and exploratory change vs. stable exploitation). It was expected that the differing requirements of the four dynamics would contingently select for relatively healthier- or intelligent-looking leaders. We found perceived facial intelligence to be a second-order context-specific trait—for instance, in times requiring a leader to address between-group cooperation—whereas perceived health is significantly preferred across all contexts (i.e., a first-order trait). The results also indicate that facial health positively affects perceived masculinity while facial intelligence negatively affects perceived masculinity, which may partially explain leader choice in some of the environmental contexts. The limitations and a number of implications regarding leadership biases are discussed. PMID:25414653
Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T
2008-10-01
PROBLEM DEFINITION: The goal of this work is the integration of information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses were examined: (I) the facial nerve (CN VII) can be localized intraoperatively with high reliability by the stimulation probe, and a computer program can discriminate true-positive EMG signals from false-positive artifacts; (II) the course of the facial nerve can be registered in three-dimensional space from EMG signals on a nerve model in a laboratory test, the individual measurement points can be combined into a route model, and the route model can be integrated into the data of digital volume tomography (DVT). (I) Intraoperative EMG signals of the facial nerve were classified by automatic software in 128 measurements. The results were correlated with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT data set was acquired. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany). The stimulation probe of the EMG system was tracked by the navigation system, which was extended by a processing unit (MiMed, Technische Universität München, Germany). Thus, the classified EMG parameters along the facial nerve route can be received, processed, and assembled into a model of the facial nerve course. Operability was examined at 120 (10 x 12) measuring points. The evaluated algorithm for classifying facial nerve EMG signals was correct in all measurements. In all 10 attempts, the nerve route was successfully visualized as a three-dimensional model. The different sizes of the individual measuring points correctly reflect the corresponding values of Istim and UEMG. This work proves the feasibility of automatic classification of intraoperative facial nerve EMG signals by a processing unit. Furthermore, it shows the feasibility of tracking the position of the stimulation probe and integrating it into a model of the facial nerve route (e.g., in DVT data). The reliability with which the position of the nerve can be captured by the stimulation probe is also incorporated into the resulting route model.
Living with Moebius syndrome: adjustment, social competence, and satisfaction with life.
Bogart, Kathleen Rives; Matsumoto, David
2010-03-01
Moebius syndrome is a rare congenital condition that results in bilateral facial paralysis. Several studies have reported social interaction and adjustment problems in people with Moebius syndrome and other facial movement disorders, presumably resulting from lack of facial expression. To determine whether adults with Moebius syndrome experience increased anxiety and depression and/or decreased social competence and satisfaction with life compared with people without facial movement disorders. Internet-based quasi-experimental study with comparison group. Thirty-seven adults with Moebius syndrome recruited through the United States-based Moebius Syndrome Foundation newsletter and Web site and 37 age- and gender-matched control participants recruited through a university participant database. Anxiety and depression, social competence, satisfaction with life, ability to express emotion facially, and questions about Moebius syndrome symptoms. People with Moebius syndrome reported significantly lower social competence than the matched control group and normative data but did not differ significantly from the control group or norms in anxiety, depression, or satisfaction with life. In people with Moebius syndrome, degree of facial expression impairment was not significantly related to the adjustment variables. Many people with Moebius syndrome are better adjusted than previous research suggests, despite their difficulties with social interaction. To enhance interaction, people with Moebius syndrome could compensate for the lack of facial expression with alternative expressive channels.
Late outcomes after grafting of the severely burned face: a quality improvement initiative.
Philp, Lauren; Umraw, Nisha; Cartotto, Robert
2012-01-01
Many approaches to surgical management of the severely burned face are described, but there are few objective outcome studies. The purpose of this study was to perform a detailed evaluation of the late outcomes in adult patients who have undergone grafting using a standardized surgical and rehabilitation approach for full-thickness (FT) facial burns to identify areas for improvement in the authors' treatment strategy. This was a prospective observational study in which patients who had undergone grafting for FT facial burns by the senior investigator at a regional burn centre between 1999 and 2010 were examined by a single evaluator. The surgical approach included tangential excision based on the facial aesthetic units, temporary cover with allograft then autografting with scalp skin preferentially, split grafts for the upper eyelid, and FT grafts for the lower eyelid. Rehabilitation included compression (Uvex and/or soft cloth), scar massage, and silicone gel sheeting. Of 35 patients with facial grafts, 14 subjects (age 43 ± 16 years with 22 ± 21% TBSA burns) returned for late follow-up at 40 ± 33 months (range, 5-91 months). A mean of four facial aesthetic units per patient were grafted (range, 1-9 units), with six full facial grafts performed. Scalp was used as donor in 10 of 14 cases. Scalp donor sites were well tolerated, with minor alopecia visible in only one case, although the donor site visibly extended slightly past the hairline in two cases. Color match with native skin was rated at 8.8 ± 0.8 of 10 when scalp skin was used compared with 7.5 ± 1.6 with other donor sites (P = .06). On the lip and chin, hypertrophic scars were significantly worse compared with the rest of the facial grafts (Vancouver scar scale 8 ± 2 vs 3 ± 1, P < .01). Sensory recovery was poor, with overall moving two-point discrimination at 11 ± 3 mm (range, 4-15 mm), and monofilament light touch was 3.8 ± 0.6. Graft borders were significantly more elevated than graft seams. On the forehead, the most notable problem was a gap between the graft and hairlines of the frontal scalp and eyebrows (range, 0-40 mm). Grafted eyelids required one or more subsequent ectropion releases in the majority of cases. The most common problem for the nose was asymmetry of the nostril apertures. The most problematic late outcomes that the authors identified after facial grafting for FT facial burns included relatively poor sensory return, elevation of graft edges, eyelid ectropion, gaps between grafts and hairline, and marked hypertrophic scarring around the mouth and chin. The results indicate that possible areas for quality improvement include greater attention to the limits of scalp harvest, more attention to pressure application to graft borders and the lip and chin during rehabilitation, greater accuracy in excision and graft placement on the forehead to avoid gaps with the hairlines, and counseling of the patient regarding the high probability of diminished facial sensation.
Contextual influences on pain communication in couples with and without a partner with chronic pain.
Gagnon, Michelle M; Hadjistavropoulos, Thomas; MacNab, Ying C
2017-10-01
This is an experimental study of pain communication in couples. Despite evidence that chronic pain in one partner impacts both members of the dyad, dyadic influences on pain communication have not been sufficiently examined and are typically studied based on retrospective reports. Our goal was to directly study contextual influences (ie, presence of chronic pain, gender, relationship quality, and pain catastrophizing) on self-reported and nonverbal (ie, facial expressions) pain responses. Couples with (n = 66) and without (n = 65) an individual with chronic pain (ICP) completed relationship and pain catastrophizing questionnaires. Subsequently, one partner underwent a pain task (pain target, PT), while the other partner observed (pain observer, PO). In couples with an ICP, the ICP was assigned to be the PT. Pain intensity and PO perceived pain intensity ratings were recorded at multiple intervals. Facial expressions were video recorded throughout the pain task. Pain-related facial expression was quantified using the Facial Action Coding System. The most consistent predictor of either partner's pain-related facial expression was the pain-related facial expression of the other partner. Pain targets provided higher pain ratings than POs and female PTs reported and showed more pain, regardless of chronic pain status. Gender and the interaction between gender and relationship satisfaction were predictors of pain-related facial expression among PTs, but not POs. None of the examined variables predicted self-reported pain. Results suggest that contextual variables influence pain communication in couples, with distinct influences for PTs and POs. Moreover, self-report and nonverbal responses are not displayed in a parallel manner.
Hur, Dong Min; Lee, Young Hee; Kim, Sung Hoon; Park, Jung Mi; Kim, Ji Hyun; Yong, Sang Yeol; Shinn, Jong Mock; Oh, Kyung Joon
2013-01-01
Objective: To examine the neurophysiologic status of patients with idiopathic facial nerve palsy (Bell's palsy) and Ramsay Hunt syndrome (herpes zoster oticus) within 7 days from onset of symptoms, by comparing the amplitude of compound muscle action potentials (CMAP) of facial muscles in electroneuronography (ENoG) and transcranial magnetic stimulation (TMS). Methods: The facial nerve conduction study using ENoG and TMS was performed in 42 patients with Bell's palsy and 14 patients with Ramsay Hunt syndrome within 7 days from onset of symptoms. The denervation ratio was calculated as the CMAP amplitude evoked by ENoG or TMS on the affected side as a percentage of the amplitude on the healthy side. The severity of the facial palsy was graded according to the House-Brackmann facial grading scale (H-B FGS). Results: In all subjects, the denervation ratio in TMS (71.53±18.38%) was significantly greater than the denervation ratio in ENoG (41.95±21.59%). The difference in denervation ratio between ENoG and TMS was significantly smaller in patients with Ramsay Hunt syndrome than in patients with Bell's palsy. The denervation ratio of ENoG or TMS did not correlate significantly with the H-B FGS. Conclusion: In the electrophysiologic evaluation of patients with facial palsy within 7 days from onset of symptoms, ENoG and TMS are useful in gaining additional information about the neurophysiologic status of the facial nerve and may help to evaluate prognosis and set a management plan. PMID:23525840
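The denervation ratio used above is a simple percentage; the snippet below spells out that calculation under assumed argument names, for either ENoG- or TMS-evoked CMAP amplitudes.

```python
# Denervation ratio: affected-side CMAP amplitude as a percentage of the
# healthy side (argument names are illustrative).
def denervation_ratio(cmap_affected_mv: float, cmap_healthy_mv: float) -> float:
    if cmap_healthy_mv <= 0:
        raise ValueError("healthy-side CMAP amplitude must be positive")
    return 100.0 * cmap_affected_mv / cmap_healthy_mv

# e.g. denervation_ratio(1.2, 2.9) ~= 41.4%, in the range reported for ENoG.
```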
Bertalanffy, Helmut; Tissira, Nadir; Krayenbühl, Niklaus; Bozinov, Oliver; Sarnthein, Johannes
2011-03-01
Surgical exposure of intrinsic brainstem lesions through the floor of the 4th ventricle requires precise identification of facial nerve (CN VII) fibers to avoid damage. Our aim was to assess the shape, size, and variability of the area where the facial nerve can be stimulated electrophysiologically on the surface of the rhomboid fossa. Over a period of 18 months, 20 patients were operated on for various brainstem and/or cerebellar lesions. Facial nerve fibers were stimulated to yield compound muscle action potentials (CMAP) in the target muscles. Using the sites of CMAP yield, a detailed functional map of the rhomboid fossa was constructed for each patient. Lesions resected included 14 gliomas, 5 cavernomas, and 1 epidermoid cyst. Of 40 response areas mapped, 19 reached the median sulcus. The distance from the obex to the caudal border of the response area ranged from 8 to 27 mm (median, 17 mm). The rostrocaudal length of the response area ranged from 2 to 15 mm (median, 5 mm). Facial nerve response areas showed large variability in size and position, even in patients with a significant distance between the facial colliculus and the underlying pathological lesion. Lesions located close to the facial colliculus markedly distorted the response area. This is the first documentation of variability in the CN VII response area in the rhomboid fossa. Knowledge of this remarkable variability may facilitate the assessment of safe entry zones to the brainstem and may contribute to improved outcome following neurosurgical interventions within this sensitive area of the brain.
Har-Shai, Yaron; Gil, Tamir; Metanes, Issa; Labbé, Daniel
2010-07-01
Facial paralysis is a significant functional and aesthetic handicap. Facial reanimation is performed either by two-stage microsurgical methods or by regional one-stage muscle pedicle flaps. Labbé has modified and improved the regional muscle pedicle transfer flaps for facial reanimation (i.e., the lengthening temporalis myoplasty procedure). This true myoplasty technique is capable of producing a coordinated, spontaneous, and symmetrical smile. An intraoperative electrical stimulation of the temporal muscle is proposed to simulate the smile of the paralyzed side on the surgical table. The intraoperative electrical stimulation of the temporalis muscle, employing direct percutaneous electrode needles or transcutaneous electrical stimulation electrodes, was utilized in 11 primary and four secondary cases with complete facial palsy. The duration of the facial paralysis was up to 12 years. Postoperative follow-up ranged from 3 to 12 months. The insertion points of the temporalis muscle tendon to the nasolabial fold, upper lip, and oral commissure had been changed according to the intraoperative muscle stimulation in six patients of the 11 primary cases (55 percent) and in all four secondary (revisional) cases. A coordinated, spontaneous, and symmetrical smile was achieved in all patients by 3 months after surgery by employing speech therapy and biofeedback. This adjunct intraoperative refinement provides crucial feedback for the surgeon in both primary and secondary facial palsy cases regarding the vector of action of the temporalis muscle and the accuracy of the anchoring points of its tendon, thus enhancing a more coordinated and symmetrical smile.
Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.
Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart
2014-10-01
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
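To illustrate the bag-of-segments idea behind MS-MIL, here is a deliberately simplified sketch, under stated assumptions: each video is a bag of segment-level BoW histograms, a segment classifier is trained with bag-level labels only, and the bag score is the maximum segment score, which also localizes the strongest "pain" segment. The paper uses a dedicated MIL learner; this max-pooling baseline only conveys the representation, not the authors' algorithm.

```python
# Simplified max-pooling MIL baseline over segment-level BoW features.
import numpy as np
from sklearn.linear_model import LogisticRegression

class MaxPoolMIL:
    def fit(self, bags, bag_labels):
        """bags: list of (n_segments_i, n_features) arrays; bag_labels: 0/1."""
        # Naive MIL training: every segment inherits its bag's label.
        X = np.vstack(bags)
        y = np.concatenate([[lbl] * len(b) for b, lbl in zip(bags, bag_labels)])
        self.clf_ = LogisticRegression(max_iter=1000).fit(X, y)
        return self

    def predict_bag(self, bag):
        """Returns (bag score, index of the highest-scoring segment)."""
        seg_scores = self.clf_.predict_proba(bag)[:, 1]
        return seg_scores.max(), int(seg_scores.argmax())
```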
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.
Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J
2015-09-01
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
Sex differences in social cognition: The case of face processing.
Proverbio, Alice Mado
2017-01-02
Several studies have demonstrated that women show a greater interest in social information and a more empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.
Patterns of craniofacial integration in extant Homo, Pan, and Gorilla.
Polanski, Joshua M; Franciscus, Robert G
2006-09-01
Brain size increased greatly during Pleistocene human evolution, while overall facial and dentognathic size decreased markedly. This mosaic pattern is due to either selective forces that acted uniquely on each functional unit in a modularized, developmentally uncoupled craniofacial complex, or alternatively, selection that acted primarily on one unit, with the other responding passively as part of a coevolved set of ontogenetically and evolutionarily integrated structures. Using conditional independence modeling on homologous linear measurements of the height, breadth, and depth of the cranium in Pan (n = 95), Gorilla (n = 102), and recent Homo (n = 120), we reject the null hypothesis of equal levels of overall cranial integration. While all three groups share the pattern of greater neurocranial integration with distinct separation between the face and neurocranium (modularization), family differences do exist. The apes are more integrated in their entire crania, but display a particularly strong pattern of integration within the facial complex related to prognathism. Modern humans display virtually no facial integration, a pattern which is likely related to their markedly decreased facial projection. Modern humans also differ from their great ape counterparts in being more integrated within the breadth dimension of the cranial vault, likely tied to the increase in brain size and eventual globularity seen in human evolution. That the modern human integration pattern differs from the ancestral African great ape pattern along the inverse neurocranial-facial trend seen in human evolution indicates that this shift in the pattern of integration is evolutionarily significant, and may help to clarify aspects of the current debate over defining modern humans. 2006 Wiley-Liss, Inc.
Modulation of α power and functional connectivity during facial affect recognition.
Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan
2013-04-03
Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex including the sensorimotor face area largely functionally decoupled and thereby protected from additional, disruptive input and that subsequent α power decrease together with increased connectedness of sensorimotor areas facilitates successful facial affect recognition.
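As a small illustration of the graph-theoretic metric mentioned above, the sketch below thresholds a phase-synchrony (phase-locking value) matrix into a binary adjacency matrix and computes each node's degree. The threshold and variable names are assumptions; the study's actual connectivity pipeline is not reproduced here.

```python
# Illustrative node-degree computation from a phase-synchrony matrix.
import numpy as np

def node_degree(plv: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """plv: (n_regions, n_regions) symmetric phase-locking values in [0, 1]."""
    adj = (plv > threshold).astype(int)
    np.fill_diagonal(adj, 0)              # ignore self-connections
    return adj.sum(axis=1)                # degree of each region/node
```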
Proposed shade guide for human facial skin and lip: a pilot study.
Wee, Alvin G; Beatty, Mark W; Gozalo-Diaz, David J; Kim-Pusateri, Seungyee; Marx, David B
2013-08-01
Currently, no commercially available facial shade guide exists in the United States for the fabrication of facial prostheses. The purpose of this study was to measure facial skin and lip color in a human population sample stratified by age, gender, and race. Clustering analysis was used to determine optimal color coordinates for a proposed facial shade guide. Participants (n=119) were recruited from 4 racial/ethnic groups, 5 age groups, and both genders. Reflectance measurements of participants' noses and lower lips were made by using a spectroradiometer and xenon arc lamp with a 45/0 optical configuration. Repeated-measures ANOVA (α=.05) was used to identify skin and lip color differences resulting from race, age, gender, and location, and a hierarchical clustering analysis was used to identify clusters of skin colors. Significant contributors to L*a*b* facial color were race and facial location (P<.01). All factors affected b* (P<.05); age affected only b* (P<.001), while gender affected only L* (P<.05) and b* (P<.05). The analyses identified 5 clusters of skin color. The study showed that skin color differences associated with age and gender occurred primarily along the yellow-blue axis. A significant lightness difference between gender groups was also found. Clustering analysis identified 5 distinct skin shade tabs. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
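A hedged sketch of the clustering step described above: Ward-linkage hierarchical clustering of CIELAB measurements cut into five clusters, whose centroids would be candidate shade-tab colors. The data layout is an assumption; this is not the study's statistical code.

```python
# Hierarchical clustering of CIELAB skin measurements into candidate shade tabs.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def shade_tabs(lab_values: np.ndarray, n_tabs: int = 5) -> np.ndarray:
    """lab_values: (n_participants, 3) array of L*, a*, b* skin measurements."""
    Z = linkage(lab_values, method="ward")            # agglomerative clustering
    labels = fcluster(Z, t=n_tabs, criterion="maxclust")
    return np.array([lab_values[labels == k].mean(axis=0)
                     for k in range(1, n_tabs + 1)])  # one centroid per tab
```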
[Summary of professor YANG Jun's experience for intractable facial paralysis].
Wang, Tao; Li, Zaiyuan; Ge, Tingqiu; Zhang, Man; Yuan, Aihong; Yang, Jun
2017-06-12
Professor YANG Jun's experience in the diagnosis and treatment of intractable facial paralysis is introduced. Professor YANG focuses on a thinking model that combines TCM, western medicine and acupuncture, and adopts a differentiation system that combines disease differentiation, syndrome differentiation and meridian differentiation; he adopts a treatment that integrates etiological treatment, overall regulation and symptomatic treatment, as well as acupuncture, moxibustion, medication and flash cupping. The acupoints of the yangming meridians are mostly selected, and acupoints of the governor vessel such as Dazhui (GV 14) and Jinsuo (GV 8) are highly valued. A multiple-needle, shallow-insertion technique with twirling, lifting and thrusting is mostly adopted to achieve a slow and mild acupuncture sensation; in addition, the facial muscles are lifted with a mechanical action. Intensive stimulation with electroacupuncture is recommended at Qianzheng (Extra), Yifeng (TE 17) and Yangbai (GB 14), with two or three treatments given per week.
Evidence of microbeads from personal care product contaminating the sea.
Cheung, Pui Kwan; Fok, Lincoln
2016-08-15
Plastic microbeads in personal care products have been identified as a source of marine pollution. Yet, their existence in the environment is rarely reported. During two surface manta trawls in the coastal waters of Hong Kong, eleven blue, spherical microbeads were captured. Their sizes (in diameters) ranged from 0.332 to 1.015mm. These microbeads possessed similar characteristics in terms of colour, shape and size with those identified and extracted from a facial scrub available in the local market. The FT-IR spectrum of the captured microbeads also matched those from the facial scrub. It was likely that the floating microbeads at the sea surface originated from a facial scrub and they have bypassed or escaped the sewage treatment system in Hong Kong. Timely voluntary or legislative actions are required to prevent more microbeads from entering the aquatic environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study
Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.
2009-01-01
Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384
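The changing local association between two automated measurements reported above can be illustrated with a moving-window correlation; the column names, 30 fps frame rate, window length, and pandas-based approach below are assumptions for illustration, not the authors' analysis code.

```python
# Hypothetical sketch: windowed correlation between smile strength and eye constriction.
import numpy as np
import pandas as pd

# Frame-by-frame automated measurements (illustrative random-walk data, not study data).
rng = np.random.default_rng(0)
n_frames = 300
df = pd.DataFrame({
    "smile_strength": np.cumsum(rng.normal(size=n_frames)),
    "eye_constriction": np.cumsum(rng.normal(size=n_frames)),
})

# A 3-second window at an assumed 30 fps: the local correlation becomes a
# continuous index of how the two signals co-vary across the interaction.
window = 90
local_r = df["smile_strength"].rolling(window).corr(df["eye_constriction"])

print(local_r.describe())  # summary of how the association varies over time
```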
Wiertel-Krawczuk, Agnieszka; Huber, Juliusz; Wojtysiak, Magdalena; Golusiński, Wojciech; Pieńkowski, Piotr; Golusiński, Paweł
2015-05-01
Parotid gland tumor surgery sometimes leads to facial nerve paralysis. Malignant tumors, more than benign ones, affect nerve function preoperatively, while postoperative observations based on combined clinical, histological and neurophysiological studies have not been reported in detail. The aims of this pilot study were to evaluate and correlate the histological properties of the tumor (its size and location) with clinical and neurophysiological assessments of facial nerve function pre- and postoperatively (at 1 and 6 months). Comparative studies included 17 patients with benign (n = 13) and malignant (n = 4) tumors. Clinical assessment was based on the House-Brackmann scale (H-B); neurophysiological diagnostics included facial electroneurography [ENG, compound muscle action potential (CMAP)], mimetic muscle electromyography (EMG) and blink-reflex examinations (BR). Mainly grade I of H-B was recorded both pre- (n = 13) and postoperatively (n = 12) in patients with small (1.5-2.4 cm) benign tumors located in superficial lobes. Patients with medium-size (2.5-3.4 cm) malignant tumors involving both lobes were scored at grade I (n = 2) and III (n = 2) preoperatively and mainly VI (n = 4) postoperatively. CMAP amplitudes after stimulation of the mandibular marginal branch were reduced by about 25% in patients with benign tumors after surgery. In the cases of malignant tumors, CMAPs were not recorded following stimulation of any branch. A similar trend was found for BR results. H-B and ENG results revealed positive correlations between the type of tumor and surgery and facial nerve function. Neurophysiological studies detected clinically silent facial nerve neuropathy of the mandibular marginal branch in the postoperative period. Needle EMG, ENG and BR examinations allow for the evaluation of facial muscle reinnervation and facial nerve regeneration.
Sasaki, Ryo; Takeuchi, Yuichi; Watanabe, Yorikatsu; Niimi, Yosuke; Sakurai, Hiroyuki; Miyata, Mariko; Yamato, Masayuki
2014-01-01
Background: Extensive facial nerve defects between the facial nerve trunk and its branches can be clinically reconstructed by incorporating double innervation into an end-to-side loop graft technique. This study developed a new animal model to evaluate the technique's ability to promote nerve regeneration. Methods: Rats were divided into intact, nonsupercharge, and supercharge groups. Artificially created facial nerve defects were reconstructed with a nerve graft sutured end-to-end from the proximal facial nerve stump to the mandibular branch (nonsupercharge group), or with a graft whose other end was additionally sutured end-to-side to the hypoglossal nerve (supercharge group). Animals were evaluated after 30 weeks. Results: Axonal diameter was significantly larger in the supercharge group than in the nonsupercharge group for the buccal (3.78 ± 1.68 vs 3.16 ± 1.22; P < 0.0001) and marginal mandibular branches (3.97 ± 2.31 vs 3.46 ± 1.57; P < 0.0001), but the diameter was significantly larger in the intact group for all branches except the temporal branch. In the supercharge group, compound muscle action potential amplitude was significantly higher than in the nonsupercharge group (4.18 ± 1.49 mV vs 1.87 ± 0.37 mV; P < 0.0001) and similar to that in the intact group (4.11 ± 0.68 mV). Retrograde labeling showed that the mimetic muscles were double-innervated by the facial and hypoglossal nerve nuclei in the supercharge group. Conclusions: Multiple facial nerve branch reconstruction with an end-to-side loop graft was able to achieve axonal distribution, and axonal supercharging from the hypoglossal nerve significantly improved outcomes. PMID:25426357
Endotracheal tube stabilization using an orthodontic skeletal anchor in a patient with facial burns.
Kanno, T; Mitsugi, M; Furuki, Y; Kozato, S
2008-04-01
Stabilizing the endotracheal tube is of vital importance in patients suffering facial burns or trauma, both in the intensive care unit and during general anaesthesia. We present a secure method using a simple orthodontic skeletal anchorage system on the maxilla and 0.4-mm stainless steel wire; it requires no work on, and places no burden on, the teeth or gingival tissue, and does not require extensive surgery.
Down syndrome detection from facial photographs using machine learning techniques
NASA Astrophysics Data System (ADS)
Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George
2013-02-01
Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects and respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, which paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome from photographs for computer-assisted, image-based facial dysmorphology. Geometric features based on facial anatomical landmarks and local texture features based on the Contourlet transform and local binary patterns are investigated to represent facial characteristics. A support vector machine classifier is then used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than those obtained using only geometric or texture features. These promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, noninvasive imaging data.
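A minimal sketch of the fusion-and-classification step is given below, assuming the geometric and texture descriptors have already been extracted; the array shapes, kernel choice, and scikit-learn pipeline are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fuse geometric and texture feature vectors and evaluate an
# SVM with leave-one-out cross-validation, in the spirit of the pipeline described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholders standing in for features computed elsewhere: per-subject geometric
# (landmark-based) descriptors, local texture (e.g., LBP/Contourlet) descriptors,
# and binary labels (case vs. control).
geometric = rng.random((40, 20))
texture = rng.random((40, 59))
y = rng.integers(0, 2, size=40)

X = np.hstack([geometric, texture])  # simple feature-level fusion by concatenation

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut(), scoring="accuracy")
print("leave-one-out accuracy:", scores.mean())
```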
Consumerism in action: how patients and physicians negotiate payment in health care.
Oh, Hyeyoung
2013-03-01
Drawing from the medical sociology literature on the patient-doctor relationship and microeconomic sociological scholarship about the role of money in personal relationships, I examined patient-physician interactions within a clinic that offered eye health and cosmetic facial services in the United States. Relying on ethnographic observations conducted in 2008, I evaluated how financial pressures shape the patient-physician relationship during the clinical encounter. To gain a financial advantage, patients attempted to reshape the relationship toward a socially intimate one, where favor and gift exchanges are more common. To ensure the rendering of services, the physician in turn allied herself with the patient, demonstrating how external parties are the barriers to affordable care. This allied relationship was tested when conflicts emerged, primarily because of the role of financial intermediaries in the clinical encounter. These conflicts resulted in the disintegration of the personal relationship, with patient and physician pitted against one another.
Todd, Rebecca M; Lee, Wayne; Evans, Jennifer W; Lewis, Marc D; Taylor, Margot J
2012-07-01
The modulation of control processes by stimulus salience, as well as associated neural activation, changes over development. We investigated age-related differences in the influence of facial emotion on brain activation when an action had to be withheld, focusing on a developmental period characterized by rapid social-emotional and cognitive change. Groups of kindergarten and young school-aged children and a group of young adults performed a modified Go/Nogo task. Response cues were preceded by happy or angry faces. After controlling for task performance, left orbitofrontal regions discriminated trials with happy vs. angry faces in children but not in adults when a response was withheld, and this effect decreased parametrically with age group. Age-related changes in prefrontal responsiveness to facial expression were not observed when an action was required, nor did this region show age-related activation changes with the demand to withhold a response in general. Such results reveal age-related differences in prefrontal activation that are specific to stimulus valence and depend on the action required. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yang, Yang; Saleemi, Imran; Shah, Mubarak
2013-07-01
This paper proposes a novel representation of articulated human actions, gestures, and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one- or k-shot learning, and 2) meaningful organization of unlabeled datasets by unsupervised clustering. The proposed representation is obtained by automatically discovering high-level subactions, or motion primitives, by hierarchical clustering of observed optical flow in a four-dimensional spatial and motion-flow space. The completely unsupervised method, in contrast to state-of-the-art representations like bag of video words, provides a meaningful representation conducive to visual interpretation and textual labeling. Each primitive action depicts an atomic subaction, like directional motion of a limb or torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the sequence of primitive labels discovered in a test video is labeled using KL divergence, and can then be represented as a string and matched against similar strings of training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a hidden Markov model to represent classes. We have performed extensive experiments on recognition by one- and k-shot learning as well as unsupervised action clustering on six human action and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation.
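The general idea of primitives as mixtures of four-dimensional Gaussians, and of clips summarized by their primitive labels, can be sketched roughly as below; the flow_samples helper, the number of components, and the histogram-distance matching are simplifying assumptions, not the authors' exact hierarchical-clustering procedure.

```python
# Hypothetical sketch: discover flow-based "primitives" with a 4-D Gaussian mixture
# and describe each clip as a histogram of primitive labels for simple matching.
import numpy as np
from sklearn.mixture import GaussianMixture

def flow_samples(clip):
    """Return an (n, 4) array of [x, y, u, v] samples for a clip (placeholder)."""
    return clip  # real code would compute dense optical flow here

# Pool samples from training clips and fit the mixture (number of primitives assumed).
rng = np.random.default_rng(1)
train_clips = [rng.normal(size=(500, 4)) for _ in range(10)]  # synthetic stand-ins
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.vstack([flow_samples(c) for c in train_clips]))

def primitive_histogram(clip):
    labels = gmm.predict(flow_samples(clip))
    hist = np.bincount(labels, minlength=gmm.n_components).astype(float)
    return hist / hist.sum()

# One-shot-style matching: nearest training clip by primitive-histogram distance.
test = rng.normal(size=(500, 4))
dists = [np.linalg.norm(primitive_histogram(test) - primitive_histogram(c)) for c in train_clips]
print("closest training clip:", int(np.argmin(dists)))
```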
Ten Mistakes To Avoid When Injecting Botulinum Toxin.
Ruiz-Rodriguez, R; Martin-Gorgojo, A
2015-01-01
Injection of botulinum toxin is currently the most common cosmetic procedure in the United States, and in recent years it has become, together with dermal fillers, the mainstay of therapy for the prevention and treatment of facial aging. However, in some cases the treatment may lead to a somewhat unnatural appearance, usually caused by loss of facial expression or other telltale signs. In the present article, we review the 10 mistakes that should be avoided when injecting botulinum toxin. We also reflect on how treatment with botulinum toxin influences us through our facial expressions, both in terms of how we feel and what others perceive. Copyright © 2015 Elsevier España, S.L.U. and AEDV. All rights reserved.
Tanzer, Michal; Shahar, Golan; Avidan, Galia
2014-01-01
The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret one’s facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals’ personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model to psychotherapy are discussed. PMID:25165439
Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai
2014-01-01
Background Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, neural substrates of response inhibition to negative facial information remain unclear. Thus we used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing in response inhibition. Methods We used implicit (gender categorization) and explicit emotional Go/Nogo tasks (emotion categorization) in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks. Results We detected a task, emotion and trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes during sad conditions versus neutral conditions were detected with explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses on P3 component revealed that right inferior frontal junction (rIFJ) was involved during this stage. The current source density (CSD) of rIFJ was higher with sad conditions compared to neutral conditions for explicit tasks, rather than for implicit tasks. Conclusions The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212
Chao, Xiuhua; Xu, Lei; Li, Jianfeng; Han, Yuechen; Li, Xiaofei; Mao, YanYan; Shang, Haiqiong; Fan, Zhaomin; Wang, Haibo
2016-06-01
Conclusion C/GP hydrogel was demonstrated to be an ideal drug delivery vehicle and scaffold in the vein conduit. Combined use of an autologous vein and NGF continuously delivered by C/GP-NGF hydrogel can improve the recovery of facial nerve defects. Objective This study investigated the effects of chitosan-β-glycerophosphate-nerve growth factor (C/GP-NGF) hydrogel combined with an autologous vein conduit on the recovery of the damaged facial nerve in a rat model. Methods A 5 mm gap in the buccal branch of a rat facial nerve was reconstructed with an autologous vein. Next, C/GP-NGF hydrogel was injected into the vein conduit. In the negative control groups, NGF solution or phosphate-buffered saline (PBS) was injected into the vein conduits, respectively. Autologous implantation was used as the positive control group. Vibrissae movement, electrophysiological assessment, and morphological analysis of regenerated nerves were performed to assess nerve regeneration. Results NGF was continuously released from the C/GP-NGF hydrogel in vitro. The recovery rate of vibrissae movement and the compound muscle action potentials of the regenerated facial nerve in the C/GP-NGF group were similar to those in the Auto group, and significantly better than those in the NGF group. Furthermore, larger regenerated axons and thicker myelin sheaths were obtained in the C/GP-NGF group than in the NGF group.
Ichikawa, Hiroko; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke
2010-09-27
Adult observers can quickly identify specific actions performed by an invisible actor from points of light attached to the actor's head and major joints. Infants are also sensitive to biological motion and prefer to see it depicted by a dynamic point-light display. In detecting biological motion such as whole-body and facial movements, neuroimaging studies have demonstrated the involvement of the occipitotemporal cortex, including the superior temporal sulcus (STS). In the present study, we used the point-light display technique and near-infrared spectroscopy (NIRS) to examine infant brain activity while viewing facial biological motion depicted in a point-light display. Dynamic facial point-light displays (PLDs) were made from video recordings of three actors making a facial expression of surprise in a dark room. As in Bassili's study, about 80 luminous markers were scattered over the surface of the actors' faces. In the experiment, we measured infants' hemodynamic responses to these displays using NIRS. We hypothesized that infants would show different neural activity for upright and inverted PLDs. The responses were compared to the baseline activation during the presentation of individual still images, which were frames extracted from the dynamic PLDs. We found that the concentration of oxy-Hb increased in the right temporal area during the presentation of the upright PLDs compared to the baseline period. This is the first study to demonstrate that infants' brain activity in face processing is induced only by the motion cue of facial movement depicted by dynamic PLDs. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
2014-01-01
Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094
A systematic review and meta-analysis of 'Systems for Social Processes' in eating disorders.
Caglar-Nazali, H Pinar; Corfield, Freya; Cardi, Valentina; Ambwani, Suman; Leppanen, Jenni; Olabintan, Olaolu; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Scognamiglio, Pasquale; Eshkevari, Ertimiss; Micali, Nadia; Treasure, Janet
2014-05-01
Social and emotional problems have been implicated in the development and maintenance of eating disorders (ED). This paper reviews the facets of social processing in ED according to the NIMH Research Domain Criteria (RDoC) 'Systems for Social Processes' framework. Embase, Medline, PsycInfo and Web of Science were searched for peer-reviewed articles published by March 2013. One hundred fifty-four studies measuring constructs of attachment, social communication, perception and understanding of self and others, and social dominance in people with ED were identified. Eleven meta-analyses were performed; they showed evidence that people with ED had attachment insecurity (d=1.31), perceived low parental care (d=.51), appraised high parental overprotection (d=0.29), impaired facial emotion recognition (d=.44) and facial communication (d=2.10), increased facial avoidance (d=.52), reduced agency (d=.39), negative self-evaluation (d=2.27), alexithymia (d=.66), poor understanding of mental states (d=1.07) and sensitivity to social dominance (d=1.08). There is less evidence for problems with production and reception of non-facial communication, animacy and action. Copyright © 2013 Elsevier Ltd. All rights reserved.
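For readers unfamiliar with the effect sizes quoted above, the standardized mean difference (Cohen's d) for a two-group comparison is conventionally defined as below, with ED and HC denoting the eating-disorder and comparison groups; this is the generic definition, not necessarily the exact estimator used in the review's meta-analyses.

```latex
d \;=\; \frac{\bar{x}_{\mathrm{ED}} - \bar{x}_{\mathrm{HC}}}{s_p},
\qquad
s_p \;=\; \sqrt{\frac{(n_{\mathrm{ED}} - 1)\,s_{\mathrm{ED}}^2 + (n_{\mathrm{HC}} - 1)\,s_{\mathrm{HC}}^2}{n_{\mathrm{ED}} + n_{\mathrm{HC}} - 2}}
```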
Dudas, Marek; Kim, Jieun; Li, Wai-Yee; Nagy, Andre; Larsson, Jonas; Karlsson, Stefan; Chai, Yang; Kaartinen, Vesa
2006-01-01
Transforming growth factor beta (TGF-β) proteins play important roles in morphogenesis of many craniofacial tissues; however, detailed biological mechanisms of TGF-β action, particularly in vivo, are still poorly understood. Here, we deleted the TGF-β type I receptor gene Alk5 specifically in the embryonic ectodermal and neural crest cell lineages. Failure in signaling via this receptor, either in the epithelium or in the mesenchyme, caused severe craniofacial defects including cleft palate. Moreover, the facial phenotypes of neural crest-specific Alk5 mutants included devastating facial cleft and appeared significantly more severe than the defects seen in corresponding mutants lacking the TGF-β type II receptor (TGFβRII), a prototypical binding partner of ALK5. Our data indicate that ALK5 plays unique, non-redundant cell-autonomous roles during facial development. Remarkable divergence between Tgfbr2 and Alk5 phenotypes, together with our biochemical in vitro data, imply that (1) ALK5 mediates signaling of a diverse set of ligands not limited to the three isoforms of TGF-β, and (2) ALK5 acts also in conjunction with type II receptors other than TGFβRII. PMID:16806156
Múnera, A; Cuestas, D M; Troncoso, J
2012-10-25
Facial nerve lesions elicit long-lasting changes in vibrissal primary motor cortex (M1) muscular representation in rodents. Reorganization of cortical representation has been attributed to potentiation of preexisting horizontal connections coming from neighboring muscle representations. However, changes in layer 5 pyramidal neuron activity induced by facial nerve lesion have not yet been explored. To address this, the effect of irreversible facial nerve injury on electrophysiological properties of layer 5 pyramidal neurons was characterized. Twenty-four adult male Wistar rats were randomly subjected to two experimental treatments: either surgical transection of the mandibular and buccal branches of the facial nerve (n=18) or sham surgery (n=6). Unitary and population activity of vibrissal M1 layer 5 pyramidal neurons recorded in vivo under general anesthesia was compared between sham-operated and facial nerve-injured animals. Injured animals were allowed either one (n=6), three (n=6), or five (n=6) weeks of recovery before recording in order to characterize the evolution of changes in electrophysiological activity. As compared to controls, facial nerve-injured animals displayed the following sustained and significant changes in spontaneous activity: increased basal firing frequency, decreased spike-associated local field oscillation amplitude, and decreased spontaneous theta burst firing frequency. Significant changes in evoked activity with whisker pad stimulation included: increased short-latency population spike amplitude, decreased long-latency population oscillation amplitude and frequency, and decreased peak frequency during evoked single-unit burst firing. Taken together, such changes demonstrate that peripheral facial nerve lesions induce robust and sustained changes in layer 5 pyramidal neurons of the vibrissal motor cortex. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Hujoel, P P; Bollen, A-M; Yuen, K C J; Hujoel, I A
2016-10-01
It has been suggested that facial traits are informative about the inherited susceptibility to tuberculosis and obesity, two current global health issues. Our aim was to compare the phenotypic characteristics of adolescents with dental markers for a concave (n=420), a convex (n=978), and a straight (n=3542) facial profile in a nationally representative sample of United States adolescents. The results show that adolescents with a concave facial profile, when compared to those with a straight facial profile, had an increased waist-to-height ratio (Δ, 1.1 [95% CI 0.5-1.7], p<0.003) and an increased acne prevalence (OR, 1.5 [95% CI 1.2-1.9], p<0.001). Adolescents with a convex facial profile, when compared to those with a straight facial profile, had an increased prevalence of tuberculosis (OR, 4.3 [95% CI 1.4-13.1], p<0.02), increased ectomorphy (Δ, 0.3 [95% CI 0.2-0.4], p<0.0001), increased left-handedness (OR, 1.4 [95% CI 1.1-1.7], p<0.007), increased color-blindness (OR, 1.7 [95% CI 1.3-2.3], p<0.004), and an increased prevalence of the rhesus ee phenotype (OR, 1.3 [95% CI 1.1-1.5], p<0.008). Adolescents with a concave facial profile, when compared to those with a convex profile, had increased mesomorphy (Δ, 1.3 [95% CI 1.1-1.5], p<0.0001), increased endomorphy (Δ, 0.5 [95% CI 0.4-0.6], p<0.0001), lower ectomorphy (Δ, 0.5 [95% CI 0.4-0.6], p<0.0001), and lower vocabulary test scores (Δ, 2.3 [95% CI 0.8-3.8], p<0.008). It is concluded that population-based survey data confirm that distinct facial features are associated with distinct somatotypes and distinct disease susceptibilities. Copyright © 2016 Elsevier GmbH. All rights reserved.
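As a reminder of how odds ratios and their confidence intervals are typically derived from a 2x2 table with cell counts a, b, c, and d, a generic (unadjusted) formulation is given below; the study's reported estimates may come from adjusted models.

```latex
\mathrm{OR} \;=\; \frac{a/b}{c/d} \;=\; \frac{ad}{bc},
\qquad
95\%\ \mathrm{CI} \;=\; \exp\!\left(\ln \mathrm{OR} \,\pm\, 1.96\sqrt{\tfrac{1}{a}+\tfrac{1}{b}+\tfrac{1}{c}+\tfrac{1}{d}}\right)
```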
Emotion processing deficits in alexithymia and response to a depth of processing intervention.
Constantinou, Elena; Panayiotou, Georgia; Theodorou, Marios
2014-12-01
Findings on alexithymic emotion difficulties have been inconsistent. We examined potential differences between alexithymic and control participants in general arousal, reactivity, facial and subjective expression, emotion labeling, and covariation between emotion response systems. A depth of processing intervention was introduced. Fifty-four participants (27 alexithymic), selected using the Toronto Alexithymia Scale-20, completed an imagery experiment (imagining joy, fear and neutral scripts), under instructions for shallow or deep emotion processing. Heart rate, skin conductance, facial electromyography and startle reflex were recorded along with subjective ratings. Results indicated hypo-reactivity to emotion among high alexithymic individuals, smaller and slower startle responses, and low covariation between physiology and self-report. No deficits in facial expression, labeling and emotion ratings were identified. Deep processing was associated with increased physiological reactivity and lower perceived dominance and arousal in high alexithymia. Findings suggest a tendency for avoidance of intense, unpleasant emotions and less defensive action preparation in alexithymia. Copyright © 2014 Elsevier B.V. All rights reserved.
Sayers, W Michael; Sayette, Michael A
2013-09-01
Research on emotion suppression has shown a rebound effect, in which expression of the targeted emotion increases following a suppression attempt. In prior investigations, participants have been explicitly instructed to suppress their responses, which has drawn the act of suppression into metaconsciousness. Yet emerging research emphasizes the importance of nonconscious approaches to emotion regulation. This study is the first in which a craving rebound effect was evaluated without simultaneously raising awareness about suppression. We aimed to link spontaneously occurring attempts to suppress cigarette craving to increased smoking motivation assessed immediately thereafter. Smokers (n = 66) received a robust cued smoking-craving manipulation while their facial responses were videotaped and coded using the Facial Action Coding System. Following smoking-cue exposure, participants completed a behavioral choice task previously found to index smoking motivation. Participants evincing suppression-related facial expressions during cue exposure subsequently valued smoking more than did those not displaying these expressions, which suggests that internally generated suppression can exert powerful rebound effects.
Palumbo, Letizia; Jellema, Tjeerd
2013-01-01
Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent’s mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent ‘emotional anticipation’, i.e. the involuntary anticipation of the other’s emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor’s identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect ‘emotional anticipation’ (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding. PMID:23409112
The Mirror Neuron System: A Fresh View
Casile, Antonino; Caggiano, Vittorio; Ferrari, Pier Francesco
2013-01-01
Mirror neurons are a class of visuomotor neurons in the monkey premotor and parietal cortices that discharge during the execution and observation of goal-directed motor acts. They are deemed to be at the basis of primates’ social abilities. In this review, the authors provide a fresh view on two still open questions about mirror neurons. The first question is their possible functional role. By reviewing recent neurophysiological data, the authors suggest that mirror neurons might represent a flexible system that encodes observed actions in terms of several behaviorally relevant features. The second question concerns the possible developmental mechanisms responsible for their initial emergence. To provide a possible answer to this question, the authors review two different aspects of sensorimotor development: facial and hand movements, respectively. The authors suggest that possibly two different “mirror” systems might underlie the development of action understanding and imitative abilities in the two cases. More specifically, a possibly prewired system already present at birth but shaped by the social environment might underlie the early development of facial imitative abilities. On the contrary, an experience-dependent system might subserve perception-action couplings in the case of hand movements. The development of this latter system might be critically dependent on the observation of one's own movements. PMID:21467305
Optical stimulation of the facial nerve: a surgical tool?
NASA Astrophysics Data System (ADS)
Richter, Claus-Peter; Teudt, Ingo Ulrik; Nevel, Adam E.; Izzo, Agnella D.; Walsh, Joseph T., Jr.
2008-02-01
One sequela of skull base surgery is the iatrogenic damage to cranial nerves. Devices that stimulate nerves with electric current can assist in nerve identification. Contemporary devices have two main limitations: (1) the physical contact of the stimulating electrode and (2) the spread of the current through the tissue. In contrast to electrical stimulation, pulsed infrared optical radiation can be used to safely and selectively stimulate neural tissue. Stimulation and screening of the nerve are possible without making physical contact. The gerbil facial nerve was irradiated with 250-μs-long pulses of 2.12 μm radiation delivered via a 600-μm-diameter optical fiber at a repetition rate of 2 Hz. Muscle action potentials were recorded with intradermal electrodes. Nerve samples were examined for possible tissue damage. Eight facial nerves were stimulated with radiant exposures between 0.71 and 1.77 J/cm2, resulting in compound muscle action potentials (CmAPs) that were simultaneously measured at the m. orbicularis oculi, m. levator nasolabialis, and m. orbicularis oris. Resulting CmAP amplitudes were 0.3-0.4 mV, 0.15-1.4 mV and 0.3-2.3 mV, respectively, depending on the radial location of the optical fiber and the radiant exposure. Individual nerve branches were also stimulated, resulting in CmAP amplitudes between 0.2 and 1.6 mV. Histology revealed tissue damage at radiant exposures of 2.2 J/cm2, but no apparent damage at radiant exposures of 2.0 J/cm2.
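The quoted radiant exposures can be related to per-pulse energies with a simple back-of-the-envelope calculation, assuming the irradiated spot equals the 600-μm fiber core diameter (a simplification, since beam divergence changes the spot at the tissue surface).

```python
# Illustrative arithmetic: per-pulse energy implied by a given radiant exposure,
# assuming the spot area equals the 600-um fiber core cross-section.
import math

fiber_diameter_cm = 0.06                                  # 600 um
spot_area_cm2 = math.pi * (fiber_diameter_cm / 2) ** 2    # ~2.83e-3 cm^2

for H in (0.71, 1.0, 1.77, 2.0, 2.2):                     # J/cm^2, values quoted above
    pulse_energy_mJ = H * spot_area_cm2 * 1e3
    print(f"{H:.2f} J/cm^2  ->  ~{pulse_energy_mJ:.1f} mJ per pulse")
```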
Nonablative laser treatment of facial rhytides
NASA Astrophysics Data System (ADS)
Lask, Gary P.; Lee, Patrick K.; Seyfzadeh, Manouchehr; Nelson, J. Stuart; Milner, Thomas E.; Anvari, Bahman; Dave, Digant P.; Geronemus, Roy G.; Bernstein, Leonard J.; Mittelman, Harry; Ridener, Laurie A.; Coulson, Walter F.; Sand, Bruce; Baumgarder, Jon; Hennings, David R.; Menefee, Richard F.; Berry, Michael J.
1997-05-01
The purpose of this study is to evaluate the safety and effectiveness of the New Star Model 130 neodymium:yttrium aluminum garnet (Nd:YAG) laser system for nonablative laser treatment of facial rhytides (e.g., periorbital wrinkles). Facial rhytides are treated with 1.32 micrometer wavelength laser light delivered through a fiberoptic handpiece into a 5 mm diameter spot using three 300 microsecond duration pulses at 100 Hz pulse repetition frequency and pulse radiant exposures extending up to 12 J/cm2. Dynamic cooling is used to cool the epidermis selectively prior to laser treatment; animal histology experiments confirm that dynamic cooling combined with nonablative laser heating protects the epidermis and selectively injures the dermis. In the human clinical study, immediately post-treatment, treated sites exhibit mild erythema and, in a few cases, edema or small blisters. There are no long-term complications such as marked dyspigmentation and persistent erythema that are commonly observed following ablative laser skin resurfacing. Preliminary results indicate that the severity of facial rhytides has been reduced, but long-term follow-up examinations are needed to quantify the reduction. The mechanism of action of this nonablative laser treatment modality may involve dermal wound healing that leads to long-term synthesis of new collagen and extracellular matrix material.
Quinto-Sánchez, Mirsha; Muñoz-Muñoz, Francesc; Gomez-Valdes, Jorge; Cintas, Celia; Navarro, Pablo; Cerqueira, Caio Cesar Silva de; Paschetta, Carolina; de Azevedo, Soledad; Ramallo, Virginia; Acuña-Alonzo, Victor; Adhikari, Kaustubh; Fuentes-Guajardo, Macarena; Hünemeier, Tábita; Everardo, Paola; de Avila, Francisco; Jaramillo, Claudia; Arias, Williams; Gallo, Carla; Poletti, Giovani; Bedoya, Gabriel; Bortolini, Maria Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Rosique, Javier; Ruiz-Linares, Andres; Gonzalez-Jose, Rolando
2018-01-17
Facial asymmetries are usually measured and interpreted as proxies for developmental noise. However, analyses focused on their developmental and genetic architecture are scarce. To advance this topic, studies based on a comprehensive and simultaneous analysis of modularity, morphological integration and facial asymmetries, including both phenotypic and genomic information, are needed. Here we explore several modularity hypotheses on a sample of Latin American mestizos, in order to test whether modularity and integration patterns differ across several genomic ancestry backgrounds. To do so, 4104 individuals were analyzed using 3D photogrammetry reconstructions and a set of 34 facial landmarks placed on each individual. We found a pattern of modularity and integration that is conserved across sub-samples differing in their genomic ancestry background. Specifically, a signal of modularity based on functional demands and organization of the face is regularly observed across the whole sample. Our results shed more light on previous evidence obtained from genome-wide association studies performed on the same samples, indicating the action of different genomic regions contributing to the expression of the nose and mouth facial phenotypes. Our results also indicate that large samples including phenotypic and genomic metadata enable a better understanding of the developmental and genetic architecture of craniofacial phenotypes.
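Before integration and modularity analyses, landmark configurations are normally superimposed; a minimal sketch using ordinary Procrustes alignment to a reference is shown below. The real study would rely on generalized Procrustes analysis in dedicated morphometrics software, so the reference configuration and scipy-based alignment here are illustrative assumptions only.

```python
# Hypothetical sketch: superimpose 3-D facial landmark configurations onto a
# reference with ordinary Procrustes analysis before downstream shape analyses.
import numpy as np
from scipy.spatial import procrustes

n_landmarks = 34
rng = np.random.default_rng(2)

# Placeholder consensus configuration and noisy individual configurations.
reference = rng.normal(size=(n_landmarks, 3))
individuals = [reference + rng.normal(scale=0.05, size=(n_landmarks, 3)) for _ in range(5)]

aligned = []
for config in individuals:
    # procrustes removes differences in position, scale, and rotation.
    _, fitted, disparity = procrustes(reference, config)
    aligned.append(fitted)
    print(f"residual shape disparity: {disparity:.4f}")

shapes = np.stack(aligned)  # (n_individuals, 34, 3) aligned coordinates
```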
Clothing Speaks: 4-H Leader's Guide and 4-H Member's Guide.
ERIC Educational Resources Information Center
Extension Service (USDA), Washington, DC.
Designed as a group project for boys and girls between the ages of 14 and 17, the informal discussion unit on clothing deals with total appearance (accessories, hair, make-up, grooming, posture, mannerisms, facial expression, and clothes) and its relationship to self-understanding and one's role in society. The unit is organized into four parts:…
Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences
Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole
2011-01-01
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’ motion); or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se. PMID:21695266
Abdulkadir, Kocer; Buket, Sanlisoy; Dilek, Agircan; Munevver, Okay; Ayse, Aralasmak
2015-04-01
Otitis media is a well-known condition, and its infra-temporal and intracranial complications are extremely rare because of the widespread use of antibiotic treatment. We report the case of a 63-year-old female with complaints of right-sided facial pain and diplopia. She had a history of acute otitis media 4 months before admission to our neurology unit. Neurological examination showed total ophthalmoplegia with ptosis, mydriasis, decreased vision and loss of the pupillary reflex on the right side. In addition, there was involvement of the 5th and 7th cranial nerves. Neurological and radiological follow-up examinations demonstrated Jacod's syndrome with unusual facial nerve damage and an infectious aetiology. Sinusitis is the most common aetiology, but there are a few reported cases of Jacod's syndrome originating from otitis media.
The development of motor behavior
Adolph, Karen E.; Franchak, John M.
2016-01-01
This article reviews research on the development of motor behavior from a developmental systems perspective. We focus on infancy when basic action systems are acquired. Posture provides a stable base for locomotion, manual actions, and facial actions. Experience facilitates improvements in motor behavior and infants accumulate immense amounts of experience with all of their basic action systems. At every point in development, perception guides motor behavior by providing feedback about the results of just prior movements and information about what to do next. Reciprocally, the development of motor behavior provides fodder for perception. More generally, motor development brings about new opportunities for acquiring knowledge about the world, and burgeoning motor skills can instigate cascades of developmental changes in perceptual, cognitive, and social domains. PMID:27906517
Reading Minds: How Infants Come to Understand Others
ERIC Educational Resources Information Center
Gopnik, Alison; Seiver, Elizabeth
2009-01-01
Navigating the social world is an extraordinarily difficult and complex task. How do we think about other people's minds, and how do we come to infer other people's intentions from their actions? Developmental psychologists have shown that even very young infants are attuned to the emotions of those around them, imitate facial expressions and…
ERIC Educational Resources Information Center
Davidson, Jane W.
2012-01-01
The research literature concerning gesture in musical performance increasingly reports that musically communicative and meaningful performances contain highly expressive bodily movements. These movements are involved in the generation of the musically expressive performance, but enquiry into the development of expressive bodily movement has been…
Kleydman, Kate; Cohen, Joel L; Marmur, Ellen
2012-12-01
Skin necrosis after soft tissue augmentation with dermal fillers is a rare but potentially severe complication. Nitroglycerin paste may be an important treatment option for dermal and epidermal ischemia in cosmetic surgery. To summarize the knowledge about nitroglycerin paste in cosmetic surgery and to understand its current use in the treatment of vascular compromise after soft tissue augmentation. To review the mechanism of action of nitroglycerin, examine its utility in the dermal vasculature in the setting of dermal filler-induced ischemia, and describe the facial anatomy danger zones in order to avoid vascular injury. A literature review was conducted to examine the mechanism of action of nitroglycerin, and a treatment algorithm was proposed from clinical observations to define strategies for impending facial necrosis after filler injection. Our experience with nitroglycerin paste and our review of the medical literature supports the use of nitroglycerin paste on the skin to help improve flow in the dermal vasculature because of its vasodilatory effect on small-caliber arterioles. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
Tickle-Degnen, Linda; Zebrowitz, Leslie A.; Ma, Hui-ing
2011-01-01
Facial masking in Parkinson’s disease is the reduction of automatic and controlled expressive movement of facial musculature, creating an appearance of apathy, social disengagement or compromised cognitive status. Research in western cultures demonstrates that practitioners form negatively biased impressions associated with patient masking. Socio-cultural norms about facial expressivity vary according to culture and gender, yet little research has studied the effect of these factors on practitioners’ responses toward patients who vary in facial expressivity. This study evaluated the effect of masking, culture and gender on practitioners’ impressions of patient psychological attributes. Practitioners (N=284) in the United States and Taiwan judged 12 Caucasian American and 12 Asian Taiwanese women and men patients in video clips from interviews. Half of each patient group had a moderate degree of facial masking and the other half had near-normal expressivity. Practitioners in both countries judged patients with higher masking to be more depressed and less sociable, less socially supportive, and less cognitively competent than patients with lower masking. Practitioners were more biased by masking when judging the sociability of the American patients, and American practitioners’ judgments of patient sociability were more negatively biased in response to masking than were those of Taiwanese practitioners. Practitioners were more biased by masking when judging the cognitive competence and social supportiveness of the Taiwanese patients, and Taiwanese practitioners’ judgments of patient cognitive competence were more negatively biased in response to masking than were those of American practitioners. The negative response to higher masking was stronger in practitioner judgments of women than men patients, particularly American patients. The findings suggest local cultural values as well as ethnic and gender stereotypes operate on practitioners’ use of facial expressivity in clinical impression formation. PMID:21664737
Saito, Kosuke; Tamaki, Tetsuro; Hirata, Maki; Hashimoto, Hiroyuki; Nakazato, Kenei; Nakajima, Nobuyuki; Kazuno, Akihito; Sakai, Akihiro; Iida, Masahiro; Okami, Kenji
2015-01-01
Head and neck cancer is often diagnosed at advanced stages, and surgical resection with wide margins is generally indicated, despite this treatment being associated with poor postoperative quality of life (QOL). We have previously reported on the therapeutic effects of skeletal muscle-derived multipotent stem cells (Sk-MSCs), which exert reconstitution capacity for muscle-nerve-blood vessel units. Recently, we further developed a 3D patch-transplantation system using Sk-MSC sheet-pellets. The aim of this study is the application of the 3D Sk-MSC transplantation system to the reconstitution of facial complex nerve-vascular networks after severe damage. Mouse experiments were performed for histological analysis and rats were used for functional examinations. The Sk-MSC sheet-pellets were prepared from GFP-Tg mice and SD rats, and were transplanted into the facial resection model (ST). Culture medium was transplanted as a control (NT). In the mouse experiment, facial-nerve-palsy (FNP) scoring was performed weekly during the recovery period, and immunohistochemistry was used for the evaluation of histological recovery after 8 weeks. In rats, contractility of facial muscles was measured via electrical stimulation of the facial nerve root, as the marker of total functional recovery at 8 weeks after transplantation. The ST group showed significantly higher (about threefold) FNP scores than the NT group after 2-8 weeks. Similarly, significant functional recovery of whisker movement muscles was confirmed in the ST group at 8 weeks after transplantation. In addition, engrafted GFP+ cells formed complex branches of nerve-vascular networks, with differentiation into Schwann cells and perineurial/endoneurial cells, as well as vascular endothelial and smooth muscle cells. Thus, Sk-MSC sheet-pellet transplantation is potentially useful for functional reconstitution therapy of large defects in facial nerve-vascular networks.
Assault-related facial injuries during the season of goodwill.
Islam, Shofiq; Uwadiae, Nosa; Hayter, Jonathan P
2016-06-01
The aim of this study was to assess if the "season of goodwill," over the 12 days of Christmas, manifests in a reduction in the rate of maxillofacial injuries secondary to interpersonal violence. We performed a retrospective analysis at a teaching hospital in the United Kingdom. We identified consecutive patients presenting at our institution with facial injuries secondary to assault during the Christmas season, together with corresponding Easter time and control periods. Data for 4 consecutive years starting from 2010 were collected. We compared the rates of presentation of facial injuries over the Christmas season with those occurring during Easter and control periods. Our outcome measures included frequency distributions of facial injuries secondary to assault as well as maxillofacial injury patterns. For the study, 440 patients met the inclusion criteria, with 194 presentations occurring during the Christmas season, 132 presentations over Easter, and 114 over the control period (P = .006). There was a statistically significant difference in the mean rates of presentation between the Christmas and Easter seasons (P = .03) and also between the Christmas and control periods (P = .02). We noted an increasing annual trend during the study period in the frequency of assault-related facial injuries during Christmas. Our data suggest that the rate of assault-related facial trauma during Christmas is significantly greater compared with that for both the Easter holiday period and the baseline presentation rate. The "season of goodwill," therefore, does not appear to manifest in a reduction in the rate of assault-related facial injuries. This increased trauma workload requires strategic planning to ensure adequate clinical cover for these anticipated busy periods. Copyright © 2016 Elsevier Inc. All rights reserved.
Role of temporal processing stages by inferior temporal neurons in facial recognition.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.
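As a toy illustration of the "coarse-then-fine" coding account described in this review, the sketch below decodes simulated population responses in successive time windows; all data, window counts, and effect sizes are invented, and the generic classifier stands in for the information-theoretic analyses used in the original studies.

```python
# Toy time-resolved decoding: coarse (global category) information is made
# decodable in early windows, fine (expression) information in later windows.
# Everything here is simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_windows = 200, 50, 6
coarse = rng.integers(0, 3, n_trials)   # hypothetical global category labels
fine = rng.integers(0, 4, n_trials)     # hypothetical expression labels

accs = {"coarse": [], "fine": []}
for w in range(n_windows):
    # Simulated responses: coarse signal strong early, fine signal strong late.
    X = rng.normal(size=(n_trials, n_neurons))
    X[:, 0] += coarse * (1.0 if w < 2 else 0.2)
    X[:, 1] += fine * (0.2 if w < 2 else 1.0)
    for name, y in [("coarse", coarse), ("fine", fine)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
        accs[name].append(round(acc, 2))

print(accs)  # coarse decodable early, fine decodable late
```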
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
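To make the FACS-driven control idea concrete, here is a hypothetical sketch of a mapping from action-unit intensities to normalized joint targets; the joint names, AU weights, and ranges are illustrative only and are not taken from the Muecas or RoboComp implementation.

```python
# Hypothetical FACS-to-actuator mapping: each joint target is a weighted sum
# of action-unit intensities in [0, 1], clipped to a normalized range.
from typing import Dict

AU_TO_JOINT: Dict[str, Dict[str, float]] = {
    "inner_brow": {"AU1": 1.0, "AU4": -0.6},    # brow raiser vs. brow lowerer
    "outer_brow": {"AU2": 1.0, "AU4": -0.4},
    "upper_lip":  {"AU10": 0.8, "AU12": 0.3},
    "lip_corner": {"AU12": 1.0, "AU15": -1.0},  # smile vs. lip-corner depressor
    "jaw":        {"AU26": 1.0},
}

def au_to_joint_targets(aus: Dict[str, float]) -> Dict[str, float]:
    """Map AU intensities to normalized joint targets in [-1, 1]."""
    targets = {}
    for joint, weights in AU_TO_JOINT.items():
        value = sum(w * aus.get(au, 0.0) for au, w in weights.items())
        targets[joint] = max(-1.0, min(1.0, value))
    return targets

# Example: a moderate happy expression (AU6 + AU12 in FACS terms).
print(au_to_joint_targets({"AU6": 0.6, "AU12": 0.8}))
```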
Greene, Jacqueline J; McClendon, Mark T; Stephanopoulos, Nicholas; Álvarez, Zaida; Stupp, Samuel I; Richter, Claus-Peter
2018-04-27
Facial nerve injury can cause severe long-term physical and psychological morbidity. There are limited repair options for an acutely transected facial nerve not amenable to primary neurorrhaphy. We hypothesize that a peptide amphiphile nanofiber neurograft may provide the nanostructure necessary to guide organized neural regeneration. Five experimental groups were compared: animals with 1) an intact nerve, 2) resection of a nerve segment without repair, and resection with immediate repair using either 3) an autograft (the resected nerve segment), 4) a neurograft, or 5) an empty conduit. The buccal branch of the rat facial nerve was directly stimulated with charge-balanced biphasic electrical current pulses at different current amplitudes while nerve compound action potentials (nCAPs) and electromyographic (EMG) responses were recorded. After 8 weeks, the proximal buccal branch was surgically re-exposed and electrically evoked nCAPs were recorded for groups 1-5. As expected, the intact nerves required significantly lower current amplitudes to evoke an nCAP than those repaired with the neurograft and autograft nerves. For other electrophysiologic parameters such as latency and maximum nCAP, there was no significant difference between the intact, autograft and neurograft groups. The resected group had variable responses to electrical stimulation, and the empty tube group was electrically silent. Immunohistochemical analysis and TEM confirmed myelinated neural regeneration. This study demonstrates that the neuroregenerative capability of peptide amphiphile nanofiber neurografts is similar to the current clinical gold standard method of repair and holds potential as an off-the-shelf solution for facial reanimation and potentially peripheral nerve repair. This article is protected by copyright. All rights reserved.
Soccer-related Facial Trauma: Multicenter Experience in 2 Brazilian University Hospitals
Dini, Gal M.; Pereira, Max D.; Gurgel, Augusto; Bastos, Endrigo O.; Nagarkar, Purushottam; Gemperli, Rolf; Ferreira, Lydia M.
2014-01-01
Background: Soccer is the most popular sport in Brazil and a high incidence of related trauma is reported. Maxillofacial trauma can be quite common, sometimes requiring prolonged hospitalization and invasive procedures. To characterize soccer-related facial fractures needing surgery in 2 major Brazilian Centers. Methods: A retrospective review of trauma medical records from the Plastic Surgery Divisions at the Universidade Federal de São Paulo–Escola Paulista de Medicina and the Hospital das Clinicas–Universidade de São Paulo was carried out to identify patients who underwent invasive surgical procedures due to acute soccer-related facial fractures. Data points reviewed included gender, date of injury, type of fracture, date of surgery, and procedure performed. Results: A total of 45 patients (31 from Escola Paulista de Medicina and 14 from Universidade de São Paulo) underwent surgical procedures to address facial fractures between March 2000 and September 2013. Forty-four patients were men, and mean age was 28 years. The fracture patterns seen were nasal bones (16 patients, 35%), orbitozygomatic (16 patients, 35%), mandibular (7 patients, 16%), orbital (6 patients, 13%), frontal (1 patient, 2%), and naso-orbito-ethmoid (1 patient, 2%). Mechanisms of injury included collisions with another player (n = 39) and being struck by the ball (n = 6). Conclusions: Although it is less common than orthopedic injuries, soccer players do sustain maxillofacial trauma. Knowledge of its frequency is important to first responders, nurses, and physicians who have initial contact with patients. Missed diagnosis or delayed treatment can lead to facial deformities and functional problems in the physiological actions of breathing, vision, and chewing. PMID:25289361
[Black bone disease of the skull and facial bones].
Laure, B; Petraud, A; Sury, F; Bayol, J-C; Marquet-Van Der Mee, N; de Pinieux, G; Goga, D
2009-11-01
We report the case of a patient with craniofacial black bone disease, discovered incidentally during a coronal approach. A 38-year-old patient was referred to our unit for a facial palsy that had appeared 10 years earlier. Rehabilitation of the facial palsy was performed with a lengthening temporal myoplasty and lengthening of the upper eyelid elevator. An unusual black color of the skull was observed at the incision of the coronal approach. Subperiosteal dissection of the skull and malars confirmed the presence of black bone disease. The history, taken postoperatively, revealed minocycline intake (200 mg per day) for 3 years; the craniofacial black bone disease was therefore attributed to minocycline. The distinctive feature of this case is the direct visualization of the entire blackened craniofacial skeleton. This abnormal pigmentation may affect various organs or tissues and is usually discovered incidentally. Bone pigmentation is irreversible, unlike that of the oral mucosa or the skin.
Russell, Cristel Antonia; Swasy, John L.; Russell, Dale Wesley; Engel, Larry
2017-01-01
Risk warning or disclosure information in advertising is only effective in correcting consumers’ judgments if enough cognitive capacity is available to process that information. Hence, comprehension of verbal warnings in TV commercials may suffer if accompanied by positive visual elements. This research addresses this concern about cross-modality interference in the context of direct-to-consumer (DTC) pharmaceutical commercials in the United States by experimentally testing whether positive facial expressions reduce consumers’ understanding of the mandated health warning. A content analysis of a sample of DTC commercials reveals that positive facial expressions are more prevalent during the verbal warning act of the commercials than during the other acts. An eye-tracking experiment conducted with specially produced DTC commercials, which vary the valence of characters’ facial expressions during the health warning, provides evidence that happy faces reduce objective comprehension of the warning. PMID:29269979
NASA Astrophysics Data System (ADS)
Gholami, Behnood
This dissertation introduces a new problem in the delivery of healthcare, which could result in lower cost and a higher quality of medical care as compared to the current healthcare practice. In particular, a framework is developed for sedation and cardiopulmonary management for patients in the intensive care unit. A method is introduced to automatically detect pain and agitation in nonverbal patients, specifically in sedated patients in the intensive care unit, using their facial expressions. Furthermore, deterministic as well as probabilistic expert systems are developed to suggest the appropriate drug dose based on patient sedation level. Patients in the intensive care unit who require mechanical ventilation due to acute respiratory failure also frequently require the administration of sedative agents. The need for sedation arises both from patient anxiety due to the loss of personal control and the unfamiliar and intrusive environment of the intensive care unit, and also due to pain or other variants of noxious stimuli. In this dissertation, we develop a rule-based expert system for cardiopulmonary management and intensive care unit sedation. Furthermore, we use probability theory to quantify uncertainty and to extend the proposed rule-based expert system to deal with more realistic situations. Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment stem from subjective assessment criteria, rather than quantifiable, measurable data. The relevance vector machine (RVM) classification technique is a Bayesian extension of the support vector machine (SVM) algorithm which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. In this dissertation, we use the RVM classification technique to distinguish pain from non-pain as well as assess pain intensity levels. We also correlate our results with the pain intensity assessed by expert and non-expert human examiners. Next, we consider facial expression recognition using an unsupervised learning framework. We show that different facial expressions reside on distinct subspaces if the manifold is unfolded. In particular, semi-definite embedding is used to reduce the dimensionality and unfold the manifold of facial images. Next, generalized principal component analysis is used to fit a series of subspaces to the data points and associate each data point to a subspace. Data points that belong to the same subspace are shown to belong to the same facial expression. In clinical intensive care unit practice sedative/analgesic agents are titrated to achieve a specific level of sedation. The level of sedation is currently based on clinical scoring systems. Examples include the motor activity assessment scale (MAAS), the Richmond agitation-sedation scale (RASS), and the modified Ramsay sedation scale (MRSS). In general, the goal of the clinician is to find the drug dose that maintains the patient at a sedation score corresponding to a moderately sedated state. In this research, we use pharmacokinetic and pharmacodynamic modeling to find an optimal drug dosing control policy to drive the patient to a desired MRSS score. Atrial fibrillation, a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers of the heart, is a rapidly growing problem in modern societies. 
One treatment, referred to as catheter ablation, targets specific parts of the left atrium for radio frequency ablation using an intracardiac catheter. As a first step towards the general solution to the computer-assisted segmentation of the left atrial wall, we use shape learning and shape-based image segmentation to identify the endocardial wall of the left atrium in the delayed-enhancement magnetic resonance images. (Abstract shortened by UMI.)
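As a rough illustration of the pain-classification step described above: the dissertation uses a relevance vector machine (RVM), which is not available in scikit-learn, so the sketch below substitutes a probability-calibrated SVM purely to show the idea of producing class posteriors (pain vs. non-pain) from facial-image features; all data are simulated.

```python
# Stand-in sketch for probabilistic pain/no-pain classification. An RVM would
# return sparse posterior probabilities; here a calibrated SVM illustrates the
# posterior-output idea on simulated feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 300, 40                          # hypothetical feature vectors per frame
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(int)  # 1 = "pain"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

posterior = clf.predict_proba(X_te)[:, 1]   # P(pain | features)
print("test accuracy:", round(clf.score(X_te, y_te), 2))
print("first five posteriors:", posterior[:5].round(2))
```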
Spontaneous Facial Mimicry is Modulated by Joint Attention and Autistic Traits.
Neufeld, Janina; Ioannou, Christina; Korb, Sebastian; Schilbach, Leonhard; Chakrabarti, Bhismadev
2016-07-01
Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have usually been observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent to the participants' gaze. SFM of happy expressions was measured by Electromyography (EMG) recording over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism. Autism Res 2016, 9: 781-789. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.
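A simplified sketch of how spontaneous facial mimicry is commonly quantified from zygomaticus EMG follows; the sampling rate, windows, and signals are assumptions for illustration and do not reproduce the study's processing pipeline.

```python
# Illustrative SFM quantification: rectify and smooth the EMG, then express
# post-stimulus activity as change from a pre-stimulus baseline. All signals
# and parameters are simulated/assumed.
import numpy as np

fs = 1000                                   # Hz, assumed EMG sampling rate
t = np.arange(-1.0, 3.0, 1.0 / fs)          # 1 s baseline, 3 s post-stimulus
rng = np.random.default_rng(2)
emg = rng.normal(scale=5.0, size=t.size)    # simulated raw EMG (microvolts)
emg[t > 0.5] += 8.0 * rng.random((t > 0.5).sum())  # simulated mimicry response

rectified = np.abs(emg)
kernel = np.ones(100) / 100                 # 100 ms moving-average smoothing
smoothed = np.convolve(rectified, kernel, mode="same")

baseline = smoothed[t < 0].mean()
response = smoothed[(t >= 0.5) & (t <= 3.0)].mean()
print(f"baseline-corrected zygomaticus response: {response - baseline:.2f} uV")
```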
NASA Astrophysics Data System (ADS)
Hermsmeier, Maiko; Sawant, Tanvee; Lac, Diana; Yamamoto, Akira; Chen, Xin; Nagavarapu, Usha; Evans, Conor L.; Chan, Kin Foong
2017-02-01
Minocycline is an antibiotic regularly prescribed to treat acne vulgaris. The only commercially available minocycline comes in an oral dosage form, which often results in systemic adverse effects. A topical minocycline composition (BPX-01) was developed to provide localized and targeted delivery to the epidermis and pilosebaceous unit where acne-related bacteria, Propionibacterium acnes (P. acnes), reside. As minocycline is a known fluorophore, fluorescence microscopy was performed to investigate its potential use in visualizing minocycline distribution within tissues. BPX-01, with various concentrations of minocycline, was applied topically to freshly excised human facial skin specimens. Spatial distribution of minocycline and its fluorescence intensity within the stratum corneum, epidermis, dermis, and pilosebaceous unit were assessed. The resulting fluorescence intensity data as a function of minocycline concentration may indicate clinically relevant therapeutic doses of topical BPX-01 needed to kill P. acnes and reduce inflammation for successful clinical outcomes.
Memory for faces: the effect of facial appearance and the context in which the face is encountered.
Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio
2015-03-01
We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.
Display rules for anger and aggression in school-age children.
Underwood, M K; Coie, J D; Herbsman, C R
1992-04-01
2 related studies addressed the development of display rules for anger and the relation between use of display rules for anger and aggressiveness as rated by school peers. Third, fifth, and seventh graders (ages 8.4, 10.9, and 12.8, respectively) gave hypothetical responses to videotaped, anger provoking vignettes. Overall, regardless of how display rules were defined, subjects reported display rules more often with teachers than with peers for both facial expressions and actions. Reported masking of facial expressions of anger increased with age, but only with teachers. Girls reported masking of facial expressions of anger more than boys. There was a trend for aggressive subjects to invoke display rules for anger less than nonaggressive subjects. The phenomenon of display rules for anger is complex and dependent on the way display rules are defined and the age and gender of the subjects. Most of all, whether children say they would behave angrily seems to be determined by the social context for revealing angry feelings; children say they would express anger genuinely much more often with peers than with teachers.
Infants' Perception of Emotion from Body Movements
ERIC Educational Resources Information Center
Zieber, Nicole; Kangas, Ashley; Hock, Alyson; Bhatt, Ramesh S.
2014-01-01
Adults recognize emotions conveyed by bodies with comparable accuracy to facial emotions. However, no prior study has explored infants' perception of body emotions. In Experiment 1, 6.5-month-olds (n = 32) preferred happy over neutral actions of actors with covered faces in upright but not inverted silent videos. In Experiment 2, infants…
ERIC Educational Resources Information Center
Ferrari, Pier Francesco; Paukner, Annika; Ruggiero, Angela; Darcey, Lisa; Unbehagen, Sarah; Suomi, Stephen J.
2009-01-01
The capacity to imitate facial gestures is highly variable in rhesus macaques and this variability may be related to differences in specific neurobehavioral patterns of development. This study evaluated the differential neonatal imitative response of 41 macaques in relation to the development of sensory, motor, and cognitive skills throughout the…
Seeing Emotions: A Review of Micro and Subtle Emotion Expression Training
ERIC Educational Resources Information Center
Poole, Ernest Andre
2016-01-01
In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system of measuring muscular movement in the face and its relationship to the…
Direct effects of diazepam on emotional processing in healthy volunteers
Murphy, S. E.; Downham, C.; Cowen, P. J.
2008-01-01
Rationale Pharmacological agents used in the treatment of anxiety have been reported to decrease threat relevant processing in patients and healthy controls, suggesting a potentially relevant mechanism of action. However, the effects of the anxiolytic diazepam have typically been examined at sedative doses, which do not allow the direct actions on emotional processing to be fully separated from global effects of the drug on cognition and alertness. Objectives The aim of this study was to investigate the effect of a lower, but still clinically effective, dose of diazepam on emotional processing in healthy volunteers. Materials and methods Twenty-four participants were randomised to receive a single dose of diazepam (5 mg) or placebo. Sixty minutes later, participants completed a battery of psychological tests, including measures of non-emotional cognitive performance (reaction time and sustained attention) and emotional processing (affective modulation of the startle reflex, attentional dot probe, facial expression recognition, and emotional memory). Mood and subjective experience were also measured. Results Diazepam significantly modulated attentional vigilance to masked emotional faces and significantly decreased overall startle reactivity. Diazepam did not significantly affect mood, alertness, response times, facial expression recognition, or sustained attention. Conclusions At non-sedating doses, diazepam produces effects on attentional vigilance and startle responsivity that are consistent with its anxiolytic action. This may be an underlying mechanism through which benzodiazepines exert their therapeutic effects in clinical anxiety. PMID:18581100
Assessment of facial golden proportions among central Indian population.
Saurabh, Rathore; Piyush, Bolya; Sourabh, Bhatt; Preeti, Ojha; Trivedi, Rutvik; Vishnoi, Pradeep
2016-12-01
This study aimed to identify and establish facial and smile proportions in young adults, to compare the results with the ideal or divine proportions, to compare the proportions of the males and females in our study population, and to compare them with those established for Caucasian and Japanese populations. Two hundred participants (164 females, 36 males) with Angle's class I malocclusion and well-balanced faces were selected and photographed in the frontal repose position. Analysis was done in Adobe Photoshop software. Statistical analysis was done using the Statistical Package for the Social Sciences version 17.0 (IBM Corporation, Armonk, New York, United States). The results suggested that females are closer to the ideal ratios and males deviate more from them, although the proportions of males and females were not considerably different from each other. In the Indian population, the upper third facial height (TR-LC) was increased and the mid-face height (LC-LN) was decreased; in the lower third of the face, LN-CH was slightly increased in comparison with CH-ME. Among facial widths, the outer canthal width (LC-LC) was greater in the Indian population and the mouth width (CH-CH) was normal. Compared with the Indian population, Japanese participants had wider noses, a greater outer canthal distance, and a greater bitemporal width. It was concluded that a significant difference exists between the proportions of the Indian population and the ideal ratio. When the Indian population was compared with the Japanese and Caucasian populations, some facial proportion parameters showed significant differences, indicating the need to establish standardized norms for various facial proportions in the Indian population.
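The proportion analysis above reduces to simple ratios between landmark distances. The sketch below uses the abstract's point labels (TR, LC, LN, CH, ME) with invented coordinates to show the arithmetic, including one comparison against the golden ("divine") proportion.

```python
# Landmark-ratio arithmetic with hypothetical vertical coordinates (pixels)
# on a frontal photograph; labels follow the abstract's notation.
import math

PHI = (1 + math.sqrt(5)) / 2                     # the "divine" proportion, ~1.618

y = {"TR": 100, "LC": 255, "LN": 395, "CH": 470, "ME": 555}  # invented values

def seg(a: str, b: str) -> float:
    return abs(y[a] - y[b])

segments = {
    "upper third (TR-LC)": seg("TR", "LC"),
    "middle third (LC-LN)": seg("LC", "LN"),
    "lower face, upper part (LN-CH)": seg("LN", "CH"),
    "lower face, lower part (CH-ME)": seg("CH", "ME"),
}
for name, value in segments.items():
    print(f"{name}: {value:.0f} px")

# Example proportion checked against the golden ratio.
print(f"(LN-ME)/(LC-LN) = {seg('LN', 'ME') / seg('LC', 'LN'):.3f} vs phi = {PHI:.3f}")
```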
Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W
2018-05-01
To determine whether cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred using a 5-point Likert scale. The setting was the Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria were nonsyndromic unilateral cleft lip and an available frontal view photograph at around 10 years of age; exclusion criteria were a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs and between attractive versus unattractive rated patients were evaluated by paired t test. Nasolabial aesthetics were scored more negatively on full-face photographs than on cropped photographs, regardless of facial attractiveness (mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; nose and lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are thus assessed significantly more positively on cropped images than on full-face images. For this reason, cropping images to reveal the nasolabial area only is recommended for aesthetic assessments.
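A minimal sketch of the paired comparison described above, run on invented ratings: scipy's paired t test stands in for the reported analysis (mean nose score 2.8 cropped vs. 3.0 full-face, P < .001, where the abstract implies higher scores are less favourable).

```python
# Paired t test on simulated CARS nose scores for cropped vs. full-face views;
# the effect size and rating distributions are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 81                                                   # photographs in the study
cropped = np.clip(rng.normal(2.8, 0.6, n), 1, 5)         # hypothetical ratings
full_face = np.clip(cropped + rng.normal(0.2, 0.3, n), 1, 5)

t, p = stats.ttest_rel(full_face, cropped)
print(f"mean cropped = {cropped.mean():.2f}, mean full-face = {full_face.mean():.2f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
```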
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a critically important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework that combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
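The two fusion strategies can be sketched compactly. In the example below, simulated "facial-expression" and "hand-motion" feature blocks are combined either at the feature level (weighted concatenation followed by LDA) or at the decision level (weighted averaging of per-modality posteriors); logistic regression replaces the paper's condensation-based classifier purely for brevity.

```python
# Feature-level vs. decision-level fusion on simulated multimodal features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, n_classes = 600, 12
y = rng.integers(0, n_classes, n)
face = rng.normal(size=(n, 30)) + y[:, None] * 0.15      # simulated feature groups
hand = rng.normal(size=(n, 40)) + y[:, None] * 0.10

tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0, stratify=y)

# Feature-level fusion: weight and concatenate blocks, then LDA.
X = np.hstack([1.0 * face, 0.8 * hand])
feat_clf = LinearDiscriminantAnalysis().fit(X[tr], y[tr])
print("feature-level fusion accuracy:", round(feat_clf.score(X[te], y[te]), 2))

# Decision-level fusion: per-modality classifiers, weighted posterior average.
face_clf = LogisticRegression(max_iter=2000).fit(face[tr], y[tr])
hand_clf = LogisticRegression(max_iter=2000).fit(hand[tr], y[tr])
proba = 0.6 * face_clf.predict_proba(face[te]) + 0.4 * hand_clf.predict_proba(hand[te])
acc = (proba.argmax(axis=1) == y[te]).mean()
print("decision-level fusion accuracy:", round(float(acc), 2))
```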
de Faria, Maria Estela Justamante; Carvalho, Luciani R; Rossetto, Shirley M; Amaral, Terezinha Sampaio; Berger, Karina; Arnhold, Ivo Jorge Prado; Mendonca, Berenice Bilharinho
2009-01-01
There are many controversies regarding side effects of growth hormone (GH) treatment on craniofacial and extremity growth. Our aim was to estimate GH action on craniofacial development and extremity growth in GH-deficient patients. Twenty patients with GH deficiency, with chronological ages ranging from 4.6 to 24.3 years (bone ages from 1.5 to 13 years), were divided into 2 groups: group 1 (n = 6), naive to GH treatment, and group 2 (n = 14), on ongoing GH treatment for 2-11 years. GH doses (0.1-0.15 U/kg/day) were adjusted to maintain insulin-like growth factor 1 and insulin-like growth factor binding protein 3 levels within the normal range. Anthropometric measurements, cephalometric analyses and facial photographs to assess profile and harmony were performed annually for at least 3 years. Two patients with a disharmonious profile due to mandibular growth attained harmony, and none developed facial disharmony. Increased hand size (>P97) was observed in 2 female patients and increased foot size in 4 patients (1 female), neither of which correlated with GH treatment duration or with increased insulin-like growth factor 1 levels. GH treatment with standard doses in GH-deficient patients can improve the facial profile in retrognathic patients and does not lead to facial disharmony, although extremity growth, mainly involving the feet, can occur. Copyright 2009 S. Karger AG, Basel.
Anatomy of Sodium Hypochlorite Accidents Involving Facial Ecchymosis – A Review
Zhu, Wan-chun; Gyamfi, Jacqueline; Niu, Li-na; Schoeffel, G. John; Liu, Si-ying; Santarcangelo, Filippo; Khan, Sara; Tay, Kelvin C-Y.; Pashley, David H.; Tay, Franklin R.
2013-01-01
Objectives Root canal treatment forms an essential part of general dental practice. Sodium hypochlorite (NaOCl) is the most commonly used irrigant in endodontics due to its ability to dissolve organic soft tissues in the root canal system and its action as a potent antimicrobial agent. Although NaOCl accidents created by extrusion of the irrigant through root apices are relatively rare and are seldom life-threatening, they do create substantial morbidity when they occur. Methods To date, NaOCl accidents have only been published as isolated case reports. Although previous studies have attempted to summarise the symptoms involved in these case reports, there was no endeavor to analyse the distribution of soft tissue involvement in those reports. In this review, the anatomy of a classical NaOCl accident that involves facial swelling and ecchymosis is discussed. Results By summarising the facial manifestations presented in previous case reports, a novel hypothesis that involves intravenous infusion of extruded NaOCl into the facial vein via non-collapsible venous sinusoids within the cancellous bone is presented. Conclusions Understanding the mechanism involved in precipitating a classic NaOCl accident will enable the profession to make the best decision regarding the choice of irrigant delivery techniques in root canal débridement, and for manufacturers to design and improve their irrigation systems to achieve maximum safety and efficient cleanliness of the root canal system. PMID:23994710
First U.S. near-total human face transplantation: a paradigm shift for massive complex injuries.
Siemionow, Maria Z; Papay, Frank; Djohan, Risal; Bernard, Steven; Gordon, Chad R; Alam, Daniel; Hendrickson, Mark; Lohman, Robert; Eghtesad, Bijan; Fung, John
2010-01-01
Severe complex facial injuries are difficult to reconstruct and require multiple surgical procedures. The potential of performing complex craniofacial reconstruction in one surgical procedure is appealing, and composite face allograft transplantation may be considered an alternative option. The authors describe establishment of the Cleveland Clinic face transplantation program that led them to perform the first U.S. near-total face transplantation. In November of 2004, the authors received the world's first institutional review board approval to perform a face transplant in humans. In December of 2008, after a 22-hour operation, the authors performed the first near-total face transplantation in the United States, replacing 80 percent of the patient's traumatic facial deficit with a composite allograft from a brain-dead donor. This largest, and most complex, face allograft in the world included over 535 cm2 of facial skin; functional units of full nose with nasal lining and bony skeleton; lower eyelids and upper lip; underlying muscles and bones, including orbital floor, zygoma, maxilla, alveolus with teeth, hard palate, and parotid glands; and pertinent nerves, arteries, and veins. Immunosuppressive treatment consisted of thymoglobulin, tacrolimus, mycophenolate mofetil, and prednisone. The patient tolerated the procedure and immunosuppression well. At day 47 after transplantation, routine biopsy showed rejection of the graft mucosa without clinical evidence of skin or graft rejection. The patient's physical and psychological recovery went well. The functional outcome has been excellent, including optimal return of breathing through the nose, smelling, tasting, speaking, drinking from a cup, and eating solid foods. The functional outcome thus far at 8 months is rewarding and confirms the feasibility of performing complex reconstruction of severely disfigured patients in a single surgical procedure of facial allotransplantation.
Denize, Erin Stewart; McDonald, Fraser; Sherriff, Martyn
2014-01-01
Objective To evaluate the relative importance of bilabial prominence in relation to other facial profile parameters in a normal population. Methods Profile stimulus images of 38 individuals (28 female and 10 male; ages 19-25 years) were shown to an unrelated group of first-year students (n = 42; ages 18-24 years). The images were individually viewed on a 17-inch monitor. The observers received standardized instructions before viewing. A six-question questionnaire was completed using a Likert-type scale. The responses were analyzed by ordered logistic regression to identify associations between profile characteristics and observer preferences. The Bayesian Information Criterion was used to select variables that explained observer preferences most accurately. Results Nasal, bilabial, and chin prominences; the nasofrontal angle; and lip curls had the greatest effect on overall profile attractiveness perceptions. The lip-chin-throat angle and upper lip curl had the greatest effect on forehead prominence perceptions. The bilabial prominence, nasolabial angle (particularly the lower component), and mentolabial angle had the greatest effect on nasal prominence perceptions. The bilabial prominence, nasolabial angle, chin prominence, and submental length had the greatest effect on lip prominence perceptions. The bilabial prominence, nasolabial angle, mentolabial angle, and submental length had the greatest effect on chin prominence perceptions. Conclusions More prominent lips, within normal limits, may be considered more attractive in the profile view. Profile parameters have a greater influence on their neighboring aesthetic units but indirectly influence related profile parameters, endorsing the importance of achieving an aesthetic balance between relative prominences of all aesthetic units of the facial profile. PMID:25133133
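As a hedged sketch of BIC-guided selection for ordered (Likert-type) preference ratings, the example below fits statsmodels' OrderedModel over all predictor subsets and keeps the lowest-BIC set; the predictors, responses, and effect sizes are simulated, and the model is only a stand-in for the ordered logistic regression reported above.

```python
# BIC-based subset selection for an ordered logistic model on simulated
# observer ratings (5-point scale). BIC is computed manually from the
# log-likelihood: k*ln(n) - 2*llf.
import numpy as np
from itertools import combinations
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 300
predictors = {
    "bilabial_prominence": rng.normal(size=n),
    "nasolabial_angle": rng.normal(size=n),
    "chin_prominence": rng.normal(size=n),
    "submental_length": rng.normal(size=n),
}
latent = 1.2 * predictors["bilabial_prominence"] + 0.6 * predictors["chin_prominence"]
y = np.digitize(latent + rng.logistic(size=n), bins=[-2, -0.7, 0.7, 2])  # 0..4

def bic(res, n_obs):
    return len(res.params) * np.log(n_obs) - 2 * res.llf

best = None
for k in range(1, len(predictors) + 1):
    for subset in combinations(predictors, k):
        X = np.column_stack([predictors[p] for p in subset])
        res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
        score = bic(res, n)
        if best is None or score < best[0]:
            best = (score, subset)

print("lowest-BIC predictor set:", best[1], "BIC =", round(best[0], 1))
```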
Monoblock Expanded Full-thickness Graft for Resurfacing of the Burned Face in Young Patients.
Allam, A M; El Khalek, A E A; Mustafa, W; Zayed, E
2007-12-31
It has been emphasized by many authors that to obtain better aesthetic results in a burned facial area to be resurfaced - if it extends into more than one aesthetic territory - the units involved should be combined into a single large composite unit allowing the largest possible skin graft to be used. Unfortunately, the donor site for full-thickness grafts is limited in young patients and hence tissue expansion is used. A monoblock expanded full-thickness skin graft for facial resurfacing after post-burn sequelae excision was used in 12 young patients after expansion of the superolateral aspect of the buttock. Females made up the majority of the patients (75%) and the ages ranged between 8 and 18 yr. The operating time was 3-3.5 hours, in two sessions. Post-operatively, we recorded partial graft necrosis in two cases (16.7%) and infection in one (8.3%), and some minor donor-site-related complications were reported, such as haematoma in one patient (8.3%), wound infection in one patient (8.3%), and wide scarring in two patients (16.7%). At follow-up, eight of the patients (66.7%) were satisfied with their new facial look as the mask effect of facial scarring had been overcome. With monoblock expanded full-thickness graft we were able to resurface the face in nine cases (75%). A second complementary procedure to reconstruct the eyebrows or reshape the nose was required in two cases (16.7%). We concluded that the monoblock expanded full-thickness graft was a suitable solution for limitation of the donor site in young patients, as the resulting wound could be closed primarily with a scar that could be concealed by the underwear, with lim.
[Effect of extracts from Dendrobii officinalis flos on hyperthyroidism Yin deficiency mice].
Lei, Shan-shan; Lv, Gui-yuan; Jin, Ze-wu; Li, Bo; Yang, Zheng-biao; Chen, Su-hong
2015-05-01
Unhealthy lifestyle habits, such as long-term smoking, heavy drinking, sexual overstrain and frequently staying up late, can induce the Yin deficiency symptoms of zygomatic flushing and dysphoria. Dendrobii officinalis flos (DOF) has the efficacy of nourishing Yin. In this study, a hyperthyroidism Yin deficiency model was established to study the Yin-nourishing effect and mechanism of action of DOF, in order to provide a pharmacological basis for developing DOF resources and reducing resource waste. ICR mice were divided into five groups: a normal control group, a model control group, a positive control group and two DOF extract groups (6 and 4 g·kg(-1)). Except for the normal group, all groups were administered thyroxine for 30 d to establish the hyperthyroidism Yin deficiency model, and were simultaneously administered the corresponding drugs for 30 d. After 4 weeks of administration, signs (facial temperature, pain threshold, heart rate and autonomic activity) were measured in the mice, and facial and ear microcirculation blood flow was detected by laser Doppler technology. After the last administration, all mice were fasted for 12 hours, blood was collected from the orbit, and serum was separated to measure AST, ALT, TG and TP with an automatic biochemistry analyzer and T3, T4 and TSH levels by ELISA. (1) Compared with the normal control group, the model control group showed significant increases in facial and ear microcirculation blood flow, facial temperature and heart rate (P < 0.05, P < 0.01), serum AST and ALT (P < 0.01), T3 (P < 0.05) and TSH (P < 0.05), and notable decreases in pain threshold (P < 0.01) and TG (P < 0.01). (2) Compared with the model control group, the DOF extract (6 g·kg(-1)) notably reduced facial and ear microcirculation blood flow, facial temperature and heart rate (P < 0.05, P < 0.01) and AST (P < 0.05), and increased the pain threshold (P < 0.01) and TG (P < 0.01). The DOF extract (4 g·kg(-1)) markedly reduced AST and ALT levels (P < 0.01, P < 0.05). Both doses (6 and 4 g·kg(-1)) significantly reduced T3 and increased serum TSH (P < 0.05). DOF improved the Yin deficiency symptoms of zygomatic flushing and dysphoria in mice, as well as the liver function injury caused by an overactive thyroid axis. In terms of mechanism, DOF may exert its Yin-nourishing and hepatoprotective effects by affecting thyroxine metabolism, improving microcirculation and reducing heart rate.
Zeichner, Joshua A; Wong, Vicky; Linkner, Rita V; Haddican, Madelaine
2013-03-01
Combination therapy using medications with complementary mechanisms of action is the standard of care in treating acne. We report results of a clinical trial evaluating the use of a fixed-dose tretinoin 0.025%/clindamycin phosphate 1.2% (T/CP) gel in combination with a benzoyl peroxide 6% foaming cloth compared with T/CP alone for facial acne. At week 12, the combination therapy group showed a trend toward greater efficacy compared with T/CP alone. There was a high success rate observed in the study, which may be attributable to the large percentage of adult female acne patients enrolled. Cutaneous adverse events were not statistically different in using combination therapy compared with T/CP alone.
Potthoff, Denise; Seitz, Rüdiger J
2015-12-01
Humans typically make probabilistic inferences about another person's affective state based on her/his bodily movements such as emotional facial expressions, emblematic gestures and whole body movements. Furthermore, humans deduce tentative predictions about the other person's intentions. Thus, the first person perspective of a subject is supplemented by the second person perspective involving theory of mind and empathy. Neuroimaging investigations have shown that the medial and lateral frontal cortex are critical nodes in the circuits underlying theory of mind, empathy, as well as intention of action. It is suggested that personal perspective taking in social interactions is paradigmatic for the capability of humans to generate probabilistic accounts of the outside world that underlie a person's control of behaviour. Copyright © 2015 Elsevier Ltd. All rights reserved.
Prenatal Alcohol Exposure Alters Biobehavioral Reactivity to Pain in Newborns
Oberlander, Tim F.; Jacobson, Sandra W.; Weinberg, Joanne; Grunau, Ruth E.; Molteno, Christopher D.; Jacobson, Joseph L.
2016-01-01
Objectives To examine biobehavioral responses to an acute pain event in a Cape Town, South Africa, cohort of 28 Cape Colored (mixed ancestry) newborns: 14 heavily exposed to alcohol during pregnancy (exposed) and 14 born to abstainers or light (<0.5 oz absolute alcohol/d) drinkers (controls). Methods Mothers were recruited during the third trimester of pregnancy. Newborn data were collected on postpartum day 3 in the maternity obstetrical unit where the infant had been delivered. Heavy prenatal alcohol exposure was defined as maternal consumption of at least 14 drinks/wk or at least 1 incident of binge drinking/mo. Acute stress-related biobehavioral markers [salivary cortisol, heart rate (HR), respiratory sinus arrhythmia (RSA), spectral measures of heart rate variability (HRV), and videotaped facial actions] were collected three times during a heel lance blood collection (baseline, lance, and recovery). After a feeding and nap, newborns were administered an abbreviated Brazelton Neonatal Behavioral Assessment Scale. Results There were no between-group differences in maternal age, marital status, parity, gravidity, depression, anxiety, pregnancy smoking, maternal education, or infant gestational age at birth (all ps > 0.15). In both groups, HR increased with the heel lance and decreased during the postlance period. The alcohol-exposed group had lower mean HR than controls throughout, and showed no change in RSA over time. Cortisol levels showed no change over time in controls but decreased over time in exposed infants. Although facial action analyses revealed no group differences in response to the heel lance, behavioral responses assessed on the Brazelton Neonatal Scale showed less arousal in the exposed group. Conclusions Both cardiac autonomic and hypothalamic–pituitary–adrenal stress reactivity measures suggest a blunted response to an acute noxious event in alcohol-exposed newborns. This is supported by results on the Brazelton Neonatal Scale indicating reduced behavioral arousal in the exposed group. To our knowledge, these data provide the first biobehavioral examination of early pain reactivity in alcohol-exposed newborns and have important implications for understanding neuro-/biobehavioral effects of prenatal alcohol exposure in the newborn period. PMID:20121718
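The spectral heart rate variability measures mentioned above are typically derived by resampling the RR-interval series onto an even time grid and integrating power in standard frequency bands. The sketch below illustrates this with simulated data and assumed parameters (4 Hz resampling, a widened high-frequency band sometimes used for infants); it is not the study's pipeline.

```python
# Simplified HRV/RSA spectral estimate from simulated infant RR intervals:
# interpolate to an even grid, Welch periodogram, integrate the HF band.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(6)
n_beats = 300
rr = (0.45
      + 0.02 * np.sin(2 * np.pi * 0.6 * np.arange(n_beats) * 0.45)  # ~0.6 Hz RSA
      + rng.normal(scale=0.01, size=n_beats))          # simulated RR intervals (s)
beat_times = np.cumsum(rr)

fs = 4.0                                               # resampling rate (Hz), assumed
grid = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_even = interp1d(beat_times, rr, kind="cubic")(grid)

f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
hf_band = (f >= 0.15) & (f < 0.8)                      # assumed infant HF/RSA band
hf_power = pxx[hf_band].sum() * (f[1] - f[0])
print(f"HF (RSA) power: {hf_power:.2e} s^2")
```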
ERIC Educational Resources Information Center
Dondi, Marco; Messinger, Daniel; Colle, Marta; Tabasso, Alessia; Simion, Francesca; Barba, Beatrice Dalla; Fogel, Alan
2007-01-01
To better understand the form and recognizability of neonatal smiling, 32 newborns (14 girls; M = 25.6 hr) were videorecorded in the behavioral states of alertness, drowsiness, active sleep, and quiet sleep. Baby Facial Action Coding System coding of both lip corner raising (simple or non-Duchenne) and lip corner raising with cheek raising…
ERIC Educational Resources Information Center
Fogel, Alan; Hsu, Hui-Chin; Shapiro, Alyson F.; Nelson-Goens, G. Christina; Secrist, Cory
2006-01-01
Different types of smiling varying in amplitude of lip corner retraction were investigated during 2 mother-infant games--peekaboo and tickle--at 6 and 12 months and during normally occurring and perturbed games. Using Facial Action Coding System (FACS), infant smiles were coded as simple (lip corner retraction only), Duchenne (simple plus cheek…
Playing Hockey, Riding Motorcycles, and the Ethics of Protection
2012-01-01
Ice hockey and motorcycle riding are increasingly popular activities in the United States that are associated with high risks of head and facial injuries. In both, effective head and facial protective equipment are available. Yet the debates about safety policies regarding the use of head protection in these activities have taken different forms, in terms of the influence of epidemiological data as well as of the ethical concerns raised. I examine these debates over injury prevention in the context of leisure activities, in which the public health duty to prevent avoidable harm must be balanced with the freedom to assume voluntary risks. PMID:23078472
Cotta, Ana; Paim, Julia Filardi; da-Cunha-Junior, Antonio Lopes; Neto, Rafael Xavier; Nunes, Simone Vilela; Navarro, Monica Magalhaes; Valicek, Jaquelin; Carvalho, Elmano; Yamamoto, Lydia U; Almeida, Camila F; Braz, Shelida Vasconcelos; Takata, Reinaldo Issao; Vainzof, Mariz
2014-01-01
Limb girdle muscular dystrophy type 2G (LGMD2G) is a subtype of autosomal recessive muscular dystrophy caused by mutations in the telethonin gene. Few LGMD2G patients have been reported worldwide, and this is the first description associated with early tibialis anterior sparing on muscle imaging and mixed myopathic-neurogenic motor unit potentials. Here we report a 31-year-old Caucasian male patient with progressive gait disturbance and severe lower limb proximal weakness since the age of 20 years, associated with subtle facial muscle weakness. Computed tomography demonstrated soleus, medial gastrocnemius, and diffuse thigh muscle involvement with tibialis anterior sparing. Electromyography disclosed both neurogenic and myopathic motor unit potentials. Muscle biopsy demonstrated large groups of atrophic and hypertrophic fibers, frequent fibers with intracytoplasmic rimmed vacuoles full of autophagic membranes and sarcoplasmic debris, and total telethonin deficiency. Molecular investigation identified the common homozygous c.157C>T mutation in the TCAP gene. This report expands the phenotypic variability of telethoninopathy/LGMD2G to include: 1) mixed neurogenic and myopathic motor unit potentials, 2) facial weakness, and 3) tibialis anterior sparing. Appropriate diagnosis in these cases is important for genetic counseling and prognosis.
Harmer, Catherine J; Shelley, Nicholas C; Cowen, Philip J; Goodwin, Guy M
2004-07-01
Antidepressants that inhibit the reuptake of serotonin (SSRIs) or norepinephrine (SNRIs) are effective in the treatment of disorders such as depression and anxiety. Cognitive psychological theories emphasize the importance of correcting negative biases of information processing in the nonpharmacological treatment of these disorders, but it is not known whether antidepressant drugs can directly modulate the neural processing of affective information. The present study therefore assessed the actions of repeated antidepressant administration on perception and memory for positive and negative emotional information in healthy volunteers. Forty-two male and female volunteers were randomly assigned to 7 days of double-blind intervention with the SSRI citalopram (20 mg/day), the SNRI reboxetine (8 mg/day), or placebo. On the final day, facial expression recognition, emotion-potentiated startle response, and memory for affect-laden words were assessed. Questionnaires monitoring mood, hostility, and anxiety were given before and after treatment. In the facial expression recognition task, citalopram and reboxetine reduced the identification of the negative facial expressions of anger and fear. Citalopram also abolished the increased startle response found in the context of negative affective images. Both antidepressants increased the relative recall of positive (versus negative) emotional material. These changes in emotional processing occurred in the absence of significant differences in ratings of mood and anxiety. However, reboxetine decreased subjective ratings of hostility and elevated energy. Short-term administration of two different antidepressant types had similar effects on emotion-related tasks in healthy volunteers, reducing the processing of negative relative to positive emotional material. Such effects of antidepressants may ameliorate the negative biases in information processing that characterize mood and anxiety disorders. They also suggest a mechanism of action potentially compatible with cognitive theories of anxiety and depression.
Godlewska, B R; Browning, M; Norbury, R; Cowen, P J; Harmer, C J
2016-11-22
Antidepressant treatment reduces behavioural and neural markers of negative emotional bias early in treatment and has been proposed as a mechanism of antidepressant drug action. Here, we provide a critical test of this hypothesis by assessing whether neural markers of early emotional processing changes predict later clinical response in depression. Thirty-five unmedicated patients with major depression took the selective serotonin re-uptake inhibitor (SSRI), escitalopram (10 mg), over 6 weeks, and were classified as responders (22 patients) versus non-responders (13 patients), based on at least a 50% reduction in symptoms by the end of treatment. The neural response to fearful and happy emotional facial expressions was assessed before and after 7 days of treatment using functional magnetic resonance imaging. Changes in the neural response to these facial cues after 7 days of escitalopram were compared in patients as a function of later clinical response. A sample of healthy controls was also assessed. At baseline, depressed patients showed greater activation to fear versus happy faces than controls in the insula and dorsal anterior cingulate. Depressed patients who went on to respond to the SSRI had a greater reduction in neural activity to fearful versus happy facial expressions after just 7 days of escitalopram across a network of regions including the anterior cingulate, insula, amygdala and thalamus. Mediation analysis confirmed that the direct effect of neural change on symptom response was not mediated by initial changes in depressive symptoms. These results support the hypothesis that early changes in emotional processing with antidepressant treatment are the basis of later clinical improvement. As such, early correction of negative bias may be a key mechanism of antidepressant drug action and a potentially useful predictor of therapeutic response.
Bouquot, J E; LaMarche, M G
1999-02-01
Previous studies have identified focal areas of alveolar tenderness, elevated mucosal temperature, radiographic abnormality, and increased radioisotope uptake or "hot spots" within the quadrant of pain in most patients with chronic, idiopathic facial pain (phantom pain, atypical facial neuralgia, and atypical facial pain). This retrospective investigation radiographically and microscopically evaluated intramedullary bone in a subset of patients with histories of endodontics, extraction, and fixed partial denture placement in an area of "idiopathic" pain. Patients from 12 US states were identified through tissue samples, histories, and radiographs submitted to a national biopsy service. Imaging tests, coagulation tests, and microscopic features were reviewed. Of 38 consecutive idiopathic facial pain patients, 32 were women. Approximately 90% of subpontic bone demonstrated either ischemic osteonecrosis (68%), chronic osteomyelitis (21%), or a combination (11%). More than 84% of the patients had abnormal radiographic changes in subpontic bone, and 5 of 9 (56%) patients who underwent radioisotope bone scanning showed hot spots in the region. Of the 14 patients who had laboratory testing for coagulation disorders, 71% were positive for thrombophilia, hypofibrinolysis, or both (normal: 2% to 7%). Ten pain-free patients with abnormal subpontic bone on radiographs were also reviewed. Intraosseous ischemia and chronic inflammation were suggested as a pathoetiologic mechanism for at least some patients with atypical facial pain. These conditions were also offered as an explanation for poor healing of extraction sockets and positive radioisotope scans.
Self-reported empathy and neural activity during action imitation and observation in schizophrenia.
Horan, William P; Iacoboni, Marco; Cross, Katy A; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K; Green, Michael F
2014-01-01
Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. Twenty-three schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy.
Humor and laughter in patients with cerebellar degeneration.
Frank, B; Propson, B; Göricke, S; Jacobi, H; Wild, B; Timmann, D
2012-06-01
Humor is a complex behavior which includes cognitive, affective and motor responses. Based on observations of affective changes in patients with cerebellar lesions, the cerebellum may support cerebral and brainstem areas involved in understanding and appreciation of humorous stimuli and expression of laughter. The aim of the present study was to examine whether humor appreciation, perception of humorous stimuli, and the subsequent facial reaction differ between patients with cerebellar degeneration and healthy controls. Twenty-three adults with pure cerebellar degeneration were compared with 23 age-, gender-, and education-matched healthy control subjects. No significant difference in humor appreciation and perception of humorous stimuli could be found between groups using the 3 Witz-Dimensionen Test, a validated test asking for funniness and aversiveness of jokes and cartoons. Furthermore, while observing jokes, humorous cartoons, and video sketches, facial expressions of subjects were videotaped and afterwards analysed using the Facial Action Coding System. Using depression as a covariate, the number and, to a lesser degree, the duration of facial expressions during laughter were reduced in cerebellar patients compared to healthy controls. In sum, appreciation of humor appears to be largely preserved in patients with chronic cerebellar degeneration. Cerebellar circuits may contribute to the expression of laughter. Findings add to the literature that non-motor disorders in patients with chronic cerebellar disease are generally mild, but do not exclude that more marked disorders may show up in acute cerebellar disease and/or in more specific tests of humor appreciation.
Faces and Photography in 19th-Century Visual Science.
Wade, Nicholas J
2016-09-01
Reading faces for identity, character, and expression is as old as humanity, but representing these states is relatively recent. From the 16th century, physiognomists classified character in terms of facial form and represented the types graphically. Darwin distinguished between physiognomy (which concerned static features reflecting character) and expression (which was dynamic and reflected emotions). Artists represented personality, pleasure, and pain in their paintings and drawings, but the scientific study of faces was revolutionized by photography in the 19th century. Rather than relying on artistic abstractions of fleeting facial expressions, scientists photographed what the eye could not discriminate. Photography was applied first to stereoscopic portraiture (by Wheatstone), then to the study of facial expressions (by Duchenne), and to identity (by Galton and Bertillon). Photography opened new methods for investigating face perception, most markedly with Galton's composites derived from combining aligned photographs of many sitters. In the same decade (1870s), Kühne took the process of photography as a model for the chemical action of light in the retina. These developments and their developers are described and fixed in time, but the ideas they initiated have proved impossible to stop. © The Author(s) 2016.
Ferrari, Pier Francesco; Barbot, Anna; Bianchi, Bernardo; Ferri, Andrea; Garofalo, Gioacchino; Bruno, Nicola; Coudé, Gino; Bertolini, Chiara; Ardizzi, Martina; Nicolini, Ylenia; Belluardo, Mauro; Stefani, Elisa De
2017-05-01
Studies over the last twenty years on the motor and premotor cortices of primates have demonstrated that the motor system is involved not only in the control and initiation of movements, but also in higher cognitive processes, such as action understanding, imitation, and empathy. Mirror neurons are only one example of this theoretical shift. Their properties demonstrate that motor and sensory processing are coupled in the brain. Such knowledge has also been central to designing new neurorehabilitative therapies for patients suffering from brain injuries and consequent motor deficits. Moebius Syndrome (MBS) patients, for example, are incapable of moving their facial muscles, which are fundamental for affective communication. These patients face an important challenge after having undergone corrective surgery: reanimating the transplanted muscles to achieve voluntary control of smiling. We propose two new complementary rehabilitative approaches for MBS patients based on observation/imitation therapy (Facial Imitation Therapy, FIT) and on hand-mouth motor synergies (Synergistic Activity Therapy, SAT). Preliminary results show that our intervention protocol is a promising approach for neurorehabilitation of patients with facial palsy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fairness modulates non-conscious facial mimicry in women.
Hofman, Dennis; Bos, Peter A; Schutter, Dennis J L G; van Honk, Jack
2012-09-07
In societies with high cooperation demands, implicit consensus on social norms enables successful human coexistence. Mimicking other people's actions and emotions has been proposed as a means to synchronize behaviour, thereby enhancing affiliation. Mimicry has long been thought to be reflexive, but it has recently been suggested that mimicry might also be motivationally driven. Here, we show during an economic bargaining game that automatic happy mimicry of those making unfair offers disappears. After the bargaining game, when the proposers have acquired either a fair or unfair reputation, we observe increased angry mimicry of proposers with an unfair reputation and decreased angry mimicry of fair proposers. These findings provide direct empirical evidence that non-conscious mimicry is modulated by fairness. We interpret the present results as reflecting that facial mimicry in women functions conditionally, dependent on situational demands.
Lew, Timothy A; Walker, John A; Wenke, Joseph C; Blackbourne, Lorne H; Hale, Robert G
2010-01-01
To characterize and describe the craniomaxillofacial (CMF) battlefield injuries sustained by US Service Members in Operation Iraqi Freedom and Operation Enduring Freedom. The Joint Theater Trauma Registry was queried from October 19, 2001, to December 11, 2007, for CMF battlefield injuries. The CMF injuries were identified using "International Classification of Diseases, Ninth Revision, Clinical Modification" codes, and the data were compiled for battle-injured service members. Nonbattlefield injuries, killed in action, and return to duty cases were excluded. CMF battlefield injuries were found in 2,014 of the 7,770 battlefield-injured US service members. These 2,014 injured service members sustained 4,783 CMF injuries (2.4 injuries per service member). The incidence of CMF battlefield injuries by branch of service was Army, 72%; Marines, 24%; Navy, 2%; and Air Force, 1%. The incidence of penetrating soft-tissue injuries and fractures was 58% and 27%, respectively. Of the fractures, 76% were open. The location of the facial fractures was the mandible in 36%, maxilla/zygoma in 19%, nasal in 14%, and orbit in 11%. The remaining 20% were not otherwise specified. The primary mechanism of injury involved explosive devices (84%). Of the injured US service members, 26% had injuries to the CMF region in the Operation Iraqi Freedom/Operation Enduring Freedom conflicts during a 6-year period. Multiple penetrating soft-tissue injuries and fractures caused by explosive devices were frequently seen. Increased survivability because of body armor, advanced battlefield medicine, and the increased use of explosive devices is probably related to the elevated incidence of CMF battlefield injuries. The current use of "International Classification of Diseases, Ninth Revision, Clinical Modification" codes with the Joint Theater Trauma Registry failed to characterize the severity of facial wounds.
Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment.
Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T; Alcázar-Ramírez, José D; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A
2015-01-01
Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposals for less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way, to approximate a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI.
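For readers unfamiliar with the regression stage described above, the following is a minimal sketch of support vector regression applied to concatenated craniofacial features and speech i-vectors to estimate the AHI. The feature extraction steps (AAM landmark detection, i-vector computation) are assumed to have been done elsewhere, and all array names, dimensions, and values below are hypothetical placeholders rather than the authors' data or code.

```python
# Sketch only: SVR on early-fused (concatenated) facial features and i-vectors.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 285
facial_feats = rng.normal(size=(n_subjects, 20))    # e.g., local craniofacial measurements (placeholder)
ivectors = rng.normal(size=(n_subjects, 400))       # low-dimensional acoustic representation (placeholder)
ahi = rng.uniform(0, 60, size=n_subjects)           # apnea-hypopnea index labels (placeholder)

X = np.hstack([facial_feats, ivectors])             # simple early fusion by concatenation
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
scores = cross_val_score(model, X, ahi, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```

Early fusion by concatenation is only one possible design choice; the two modalities could equally be combined at the score level by training separate regressors and averaging their predictions.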
Franchise medicine: how I avoid being a commodity in a global market.
Constantinides, Minas
2010-02-01
As facial plastic surgery becomes more global, pressures for practices to become commoditized will increase. Commoditized practices are those in which price drives the quality of the product. Franchised surgical practices have also recently increased within the United States and abroad. These are always commoditized by their corporate philosophies. There are better ways to create value than to lower price to compete with a neighboring practice. By establishing a Transcendent Relationship of growth, both the surgeon and the patient are more satisfied with their facial plastic surgical experiences. Key tools helpful in predicting future directions for a practice, the Four Compass Points and the Average Best Patient, will be introduced. Thieme Medical Publishers.
ERIC Educational Resources Information Center
Werdmann, Anne M.
In a sixth-grade unit, students learned about people's facial expressions through careful observation, recording, reporting, and generalizing. The students studied the faces of people of various ages; explored "masks" that people wear in different situations; learned about the use of ritual masks; made case studies of individuals to show…
Coll, Sélim Yahia; Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier
2018-05-02
Different parts of our brain code the perceptual features and actions related to an object, causing a binding problem, in which the brain has to integrate information related to an event without any interference regarding the features and actions involved in other concurrently processed events. Using a paradigm similar to that of Hommel, who revealed perception-action bindings, we showed that emotion could bind with motor actions when relevant to the task and, in specific conditions, when irrelevant to it. By adapting our protocol to a functional Magnetic Resonance Imaging paradigm, we investigated, in the present study, the neural bases of the emotion-action binding with task-relevant angry faces. Our results showed that emotion bound with motor responses. This integration revealed increased activity in distributed brain areas involved in: (i) memory, including the hippocampi; (ii) motor actions, with the precentral gyri; and (iii) emotion processing, with the insula. Interestingly, increased activations in the cingulate gyri and putamen highlighted their potential key role in the emotion-action binding, due to their involvement in emotion processing, motor actions, and memory. The present study confirms our previous results and points out for the first time the functional brain activity related to the emotion-action association.
7 CFR 275.18 - Project area/management unit corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false Project area/management unit corrective action plan... SYSTEM Corrective Action § 275.18 Project area/management unit corrective action plan. (a) The State agency shall ensure that corrective action plans are prepared at the project area/management unit level...
7 CFR 275.18 - Project area/management unit corrective action plan.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 4 2013-01-01 2013-01-01 false Project area/management unit corrective action plan... SYSTEM Corrective Action § 275.18 Project area/management unit corrective action plan. (a) The State agency shall ensure that corrective action plans are prepared at the project area/management unit level...
7 CFR 275.18 - Project area/management unit corrective action plan.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 4 2014-01-01 2014-01-01 false Project area/management unit corrective action plan... SYSTEM Corrective Action § 275.18 Project area/management unit corrective action plan. (a) The State agency shall ensure that corrective action plans are prepared at the project area/management unit level...
7 CFR 275.18 - Project area/management unit corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 4 2011-01-01 2011-01-01 false Project area/management unit corrective action plan... SYSTEM Corrective Action § 275.18 Project area/management unit corrective action plan. (a) The State agency shall ensure that corrective action plans are prepared at the project area/management unit level...
7 CFR 275.18 - Project area/management unit corrective action plan.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 4 2012-01-01 2012-01-01 false Project area/management unit corrective action plan... SYSTEM Corrective Action § 275.18 Project area/management unit corrective action plan. (a) The State agency shall ensure that corrective action plans are prepared at the project area/management unit level...
Maxillofacial injuries in the workplace.
Burnham, Richard; Martin, Tim
2013-04-01
Over a 2-year period we reviewed patients who presented to a UK maxillofacial unit with facial injuries sustained at work. We looked at links between the mechanism, injury, and characteristics of such injuries. Copyright © 2012 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Laser-assisted hair removal for facial hirsutism in women: A review of evidence.
Lee, Chun-Man
2018-06-01
Polycystic ovarian syndrome (PCOS) has been described as the most common diagnosis underlying hirsutism in women. Facial hirsutism is by far the most distressing symptom of hyperandrogenism in women with PCOS. A statistically significant improvement in psychological well-being has been reported in patients with PCOS allocated for laser-assisted hair removal. The theory of selective photothermolysis has revolutionized laser hair removal in that it is effective and safe when operated by sufficiently trained and experienced professionals. Long-pulsed ruby (694 nm), long-pulsed alexandrite (755 nm), diode (800-980 nm), and long-pulsed Nd:YAG (1064 nm) are the most widely studied commercially available laser devices for hair removal. This article introduces the fundamentals and mechanism of action of lasers in hair removal through a contemporary literature review of the medium- to long-term efficacy and safety profiles of the laser hair removal modalities most widely available commercially to date.
Treatment of hemifacial spasm with botulinum A toxin. Results and rationale.
Gonnering, R S
1986-01-01
Hemifacial spasm is characterized by unilateral, periodic, tonic contractions of facial muscles, thought to be caused by mechanical compression at the root-exit zone of the facial nerve. Electrophysiologic abnormalities such as ectopic excitation and synkinesis are typical. Although posterior fossa microsurgical nerve decompression is successful in bringing about relief of the spasm in most cases, it carries a risk to hearing. As an alternative treatment, 15 patients with hemifacial spasm were given a total of 41 sets of injections with botulinum A toxin, with a mean follow-up of 14.3 +/- 1.1 months. Relief of symptoms lasted a mean of 108.3 +/- 4.2 days. Mild transient lagophthalmos and ptosis were the only complications. Although the exact mechanism of its action and beneficial effect is speculative at this time, botulinum A toxin appears to offer an effective, safe alternative to more radical intracranial surgery for patients with hemifacial spasm.
Deep facial analysis: A new phase I epilepsy evaluation using computer vision.
Ahmedt-Aristizabal, David; Fookes, Clinton; Nguyen, Kien; Denman, Simon; Sridharan, Sridha; Dionisio, Sasha
2018-05-01
Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, where facial movements have largely been ignored. This is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset has been collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) and is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when a frontal view of the face is available. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average AUC of 0.98 of the ROC curve. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in accuracy for the model, as it is affected by data limitations, and achieves an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance the automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. The computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery. Copyright © 2018 Elsevier Inc. All rights reserved.
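The abstract above contrasts multifold with leave-one-subject-out cross-validation, a distinction that the short sketch below illustrates on a generic classifier. The deep landmark- and region-based models themselves are not reproduced; the feature matrix, labels, and subject identifiers are synthetic stand-ins introduced only for illustration.

```python
# Sketch only: multifold vs. leave-one-subject-out evaluation of a simple classifier
# over per-clip landmark-derived features (placeholders, not the authors' pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold, LeaveOneGroupOut

rng = np.random.default_rng(1)
n_clips = 200
X = rng.normal(size=(n_clips, 68 * 2))        # e.g., flattened mean facial-landmark displacements
y = rng.integers(0, 2, size=n_clips)          # 1 = semiology of interest present, 0 = absent
subjects = rng.integers(0, 12, size=n_clips)  # subject id per clip

clf = LogisticRegression(max_iter=1000)
kfold_acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=1))
loso_acc = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("multifold accuracy:", kfold_acc.mean())
print("leave-one-subject-out accuracy:", loso_acc.mean())
```

When clips from the same patient appear in both training and test folds, multifold accuracy tends to be optimistic; grouping folds by subject estimates generalization to unseen patients, which is consistent with the gap between the two figures reported above.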
The influence of nationality on the accuracy of face and voice recognition.
Doty, N D
1998-01-01
Sixty English and U.S. citizens were tested to determine the effect of nationality on accuracy in recognizing previously witnessed faces and voices. Subjects viewed a frontal facial photograph and were then asked to select that face from a set of 10 oblique facial photographs. Subjects listened to a recorded voice and were then asked to select the same voice from a set of 10 voice recordings. This process was repeated 7 more times, such that subjects identified a male and female face and voice from England, France, Belize, and the United States. Subjects demonstrated better accuracy recognizing the faces and voices of their own nationality. Subgroup analyses further supported the other-nationality effect as well as the previously documented other-race effect.
What do healthcare workers think? A survey of facial protection equipment user preferences.
Bryce, E; Forrester, L; Scharf, S; Eshghpour, M
2008-03-01
Data on healthcare workers' (HCWs) self-reported knowledge regarding selection of facial protection equipment, usage preferences and compliance are limited. We used a questionnaire on the use of facial protection equipment at a 700-bed adult tertiary care hospital employing approximately 7000 HCWs. Clinical areas targeted were those with frequent users of N95 respirators: intensive care unit, emergency room, respiratory services, and internal medicine. Respiratory therapists were also invited. In all, 137 questionnaires (68.5%) were returned. Most (72.8%) reported that training on the use of facial protection equipment was 'sufficient' to 'excellent'. The PFR95 and 3M 1860 Cone were used most frequently (56%) followed by the 3M 1870 Pocket (42%). While 95% reported having been fit-tested, only 60% were tested annually. PFR95 use exceeded the number of HCWs fit-tested for the item. Overall comfort and compliance scores were 13.6/20 and 21.5/25, respectively, for respirators and 7.7/10 and 18.5/25 for protective eyewear. No relationship between comfort and years of use of either respirators or protective eyewear was found. The results highlight potential failures in effectiveness in the use of personal protective equipment that could compromise HCW safety, and support observations that compliance in the workplace is usually less than in the research setting.
Tissue-Engineered Autologous Grafts for Facial Bone Reconstruction
Bhumiratana, Sarindr; Bernhard, Jonathan C.; Alfi, David M.; Yeager, Keith; Eton, Ryan E.; Bova, Jonathan; Shah, Forum; Gimble, Jeffrey M.; Lopez, Mandi J.; Eisig, Sidney B.; Vunjak-Novakovic, Gordana
2016-01-01
Facial deformities require precise reconstruction of the appearance and function of the original tissue. The current standard of care—the use of bone harvested from another region in the body—has major limitations, including pain and comorbidities associated with surgery. We have engineered one of the most geometrically complex facial bones by using autologous stromal/stem cells, without bone morphogenic proteins, using native bovine bone matrix and a perfusion bioreactor for the growth and transport of living grafts. The ramus-condyle unit (RCU), the most eminent load-bearing bone in the skull, was reconstructed using an image-guided personalized approach in skeletally mature Yucatan minipigs (human-scale preclinical model). We used clinically approved decellularized bovine trabecular bone as a scaffolding material, and crafted it into an anatomically correct shape using image-guided micromilling, to fit the defect. Autologous adipose-derived stromal/stem cells were seeded into the scaffold and cultured in perfusion for 3 weeks in a specialized bioreactor to form immature bone tissue. Six months after implantation, the engineered grafts maintained their anatomical structure, integrated with native tissues, and generated greater volume of new bone and greater vascular infiltration than either non-seeded anatomical scaffolds or untreated defects. This translational study demonstrates feasibility of facial bone reconstruction using autologous, anatomically shaped, living grafts formed in vitro, and presents a platform for personalized bone tissue engineering. PMID:27306665
Clinical practice guideline: Bell's palsy.
Baugh, Reginald F; Basura, Gregory J; Ishii, Lisa E; Schwartz, Seth R; Drumheller, Caitlin Murray; Burkholder, Rebecca; Deckard, Nathan A; Dawson, Cindy; Driscoll, Colin; Gillespie, M Boyd; Gurgel, Richard K; Halperin, John; Khalid, Ayesha N; Kumar, Kaparaboyna Ashok; Micco, Alan; Munsell, Debra; Rosenbaum, Steven; Vaughan, William
2013-11-01
Bell's palsy, named after the Scottish anatomist, Sir Charles Bell, is the most common acute mono-neuropathy, or disorder affecting a single nerve, and is the most common diagnosis associated with facial nerve weakness/paralysis. Bell's palsy is a rapid unilateral facial nerve paresis (weakness) or paralysis (complete loss of movement) of unknown cause. The condition leads to the partial or complete inability to voluntarily move facial muscles on the affected side of the face. Although typically self-limited, the facial paresis/paralysis that occurs in Bell's palsy may cause significant temporary oral incompetence and an inability to close the eyelid, leading to potential eye injury. Additional long-term poor outcomes do occur and can be devastating to the patient. Treatments are generally designed to improve facial function and facilitate recovery. There are myriad treatment options for Bell's palsy, and some controversy exists regarding the effectiveness of several of these options, and there are consequent variations in care. In addition, numerous diagnostic tests available are used in the evaluation of patients with Bell's palsy. Many of these tests are of questionable benefit in Bell's palsy. Furthermore, while patients with Bell's palsy enter the health care system with facial paresis/paralysis as a primary complaint, not all patients with facial paresis/paralysis have Bell's palsy. It is a concern that patients with alternative underlying etiologies may be misdiagnosed or have unnecessary delay in diagnosis. All of these quality concerns provide an important opportunity for improvement in the diagnosis and management of patients with Bell's palsy. The primary purpose of this guideline is to improve the accuracy of diagnosis for Bell's palsy, to improve the quality of care and outcomes for patients with Bell's palsy, and to decrease harmful variations in the evaluation and management of Bell's palsy. This guideline addresses these needs by encouraging accurate and efficient diagnosis and treatment and, when applicable, facilitating patient follow-up to address the management of long-term sequelae or evaluation of new or worsening symptoms not indicative of Bell's palsy. The guideline is intended for all clinicians in any setting who are likely to diagnose and manage patients with Bell's palsy. The target population is inclusive of both adults and children presenting with Bell's palsy. ACTION STATEMENTS: The development group made a strong recommendation that (a) clinicians should assess the patient using history and physical examination to exclude identifiable causes of facial paresis or paralysis in patients presenting with acute-onset unilateral facial paresis or paralysis, (b) clinicians should prescribe oral steroids within 72 hours of symptom onset for Bell's palsy patients 16 years and older, (c) clinicians should not prescribe oral antiviral therapy alone for patients with new-onset Bell's palsy, and (d) clinicians should implement eye protection for Bell's palsy patients with impaired eye closure. 
The panel made recommendations that (a) clinicians should not obtain routine laboratory testing in patients with new-onset Bell's palsy, (b) clinicians should not routinely perform diagnostic imaging for patients with new-onset Bell's palsy, (c) clinicians should not perform electrodiagnostic testing in Bell's palsy patients with incomplete facial paralysis, and (d) clinicians should reassess or refer to a facial nerve specialist those Bell's palsy patients with (1) new or worsening neurologic findings at any point, (2) ocular symptoms developing at any point, or (3) incomplete facial recovery 3 months after initial symptom onset. The development group provided the following options: (a) clinicians may offer oral antiviral therapy in addition to oral steroids within 72 hours of symptom onset for patients with Bell's palsy, and (b) clinicians may offer electrodiagnostic testing to Bell's palsy patients with complete facial paralysis. The development group made no recommendation regarding (a) surgical decompression for patients with Bell's palsy, (b) the effect of acupuncture in patients with Bell's palsy, and (c) the effect of physical therapy in patients with Bell's palsy.
Effects of the potential lithium-mimetic, ebselen, on impulsivity and emotional processing.
Masaki, Charles; Sharpley, Ann L; Cooper, Charlotte M; Godlewska, Beata R; Singh, Nisha; Vasudevan, Sridhar R; Harmer, Catherine J; Churchill, Grant C; Sharp, Trevor; Rogers, Robert D; Cowen, Philip J
2016-07-01
Lithium remains the most effective treatment for bipolar disorder and also has important effects to lower suicidal behaviour, a property that may be linked to its ability to diminish impulsive, aggressive behaviour. The antioxidant drug, ebselen, has been proposed as a possible lithium-mimetic based on its ability in animals to inhibit inositol monophosphatase (IMPase), an action which it shares with lithium. The aim of the study was to determine whether treatment with ebselen altered emotional processing and diminished measures of risk-taking behaviour. We studied 20 healthy participants who were tested on two occasions receiving either ebselen (3600 mg over 24 h) or identical placebo in a double-blind, randomized, cross-over design. Three hours after the final dose of ebselen/placebo, participants completed the Cambridge Gambling Task (CGT) and a task that required the detection of emotional facial expressions (facial emotion recognition task (FERT)). On the CGT, relative to placebo, ebselen reduced delay aversion while on the FERT, it increased the recognition of positive vs negative facial expressions. The study suggests that at the dosage used, ebselen can decrease impulsivity and produce a positive bias in emotional processing. These findings have implications for the possible use of ebselen in the disorders characterized by impulsive behaviour and dysphoric mood.
The Noh mask effect: vertical viewpoint dependence of facial expression perception.
Lyons, M J; Campbell, R; Plante, A; Coleman, M; Kamachi, M; Akamatsu, S
2000-01-01
Full-face masks, worn by skilled actors in the Noh tradition, can induce a variety of perceived expressions with changes in head orientation. Out-of-plane rotation of the head changes the two-dimensional image characteristics of the face which viewers may misinterpret as non-rigid changes due to muscle action. Three experiments with Japanese and British viewers explored this effect. Experiment 1 confirmed a systematic relationship between vertical angle of view of a Noh mask and judged affect. A forward tilted mask was more often judged happy, and one backward tilted more often judged sad. This effect was moderated by culture. Japanese viewers ascribed happiness to the mask at greater degrees of backward tilt with a reversal towards sadness at extreme forward angles. Cropping the facial image of chin and upper head contour reduced the forward-tilt reversal. Finally, the relationship between head tilt and affect was replicated with a laser-scanned human face image, but with no cultural effect. Vertical orientation of the head changes the apparent disposition of facial features and viewers respond systematically to these changes. Culture moderates this effect, and we discuss how perceptual strategies for ascribing expression to familiar and unfamiliar images may account for the differences. PMID:11413638
Transformative science education through action research and self-study practices
NASA Astrophysics Data System (ADS)
Calderon, Olga
The research studies human emotions through diverse methods and theoretical lenses. My intention in using this approach is to provide alternative ways of perceiving and interpreting emotions being experienced in the moment of arousal. Emotions are fundamental in human interactions because they are essential in the development of effective relationships of any kind and they can also mediate hostility towards others. I begin by presenting an impressionist auto-ethnography, which narrates a personal account of how science and scientific inquiry have been entrenched in me since childhood. I describe how emotions are an important part of how I perceive and respond to the world around me. I describe science in my life in terms of natural environments, which were the initial source of scientific wonder and bafflement for me. In this auto-ethnography, I recount how social interactions shaped my perceptions about people, the world, and my education trajectory. Furthermore, I illustrate how sociocultural structures are used in different contexts to mediate several life decisions that enabled me to pursue a career in science and science education. I also reflect on how some of those sociocultural aspects mediated my emotional wellness. I reveal how my life and science are interconnected and I present my story as a segue to the remainder of the dissertation. In chapters 2 and 3, I address a methodology and associated methods for research on facial expression of emotion. I use the Facial Action Coding System developed by Paul Ekman in the 1970s (Ekman, 2002) to study facial representation of emotions. In chapters 4 and 5, I review the history of oximetry and ways in which an oximeter can be used to obtain information on the physiological expression of emotions. I examine oximetry data in relation to emotional physiology in three different aspects: pulse rate, oxygenation of the blood, and plethysmography (i.e., strength of pulse). In chapters 3 and 5, I include data and observations collected in a science education course for science teachers at Brooklyn College. These observations are only a small part of a larger study of emotions and mindfulness in the science classroom by a group of researchers at the City University of New York. In this context, I explore how, while teaching and learning science, emotions are represented facially and physiologically in terms of oxygenation of the blood and pulse rate and strength.
Rhinoplasty and the aesthetic of the smile.
de Benito, J; Fernandez Sanza, I
1995-01-01
The resection of the columella and nasal depressor muscles is a simple operation to perform and one which allows an improvement in the facial physiognomy of many patients. This operation can be done alone or in conjunction with the classic rhinoplasty, thus achieving an improvement in the aesthetics of the smile. It has also been proved, contrary to common belief, that the action of these muscles has no connection with physiological breathing mechanisms.
A systems view of mother-infant face-to-face communication.
Beebe, Beatrice; Messinger, Daniel; Bahrick, Lorraine E; Margolis, Amy; Buck, Karen A; Chen, Henian
2016-04-01
Principles of a dynamic, dyadic systems view of mother-infant face-to-face communication, which considers self- and interactive processes in relation to one another, were tested. The process of interaction across time in a large low-risk community sample at infant age 4 months was examined. Split-screen videotape was coded on a 1-s time base for communication modalities of attention, affect, orientation, touch, and composite facial-visual engagement. Time-series approaches generated self- and interactive contingency estimates in each modality. Evidence supporting the following principles was obtained: (a) Significant moment-to-moment predictability within each partner (self-contingency) and between the partners (interactive contingency) characterizes mother-infant communication. (b) Interactive contingency is organized by a bidirectional, but asymmetrical, process: Maternal contingent coordination with infant is higher than infant contingent coordination with mother. (c) Self-contingency organizes communication to a far greater extent than interactive contingency. (d) Self- and interactive contingency processes are not separate; each affects the other in communication modalities of facial affect, facial-visual engagement, and orientation. Each person's self-organization exists in a dynamic, homoeostatic (negative feedback) balance with the degree to which the person coordinates with the partner. For example, those individuals who are less facially stable are likely to coordinate more strongly with the partner's facial affect and vice versa. Our findings support the concept that the dyad is a fundamental unit of analysis in the investigation of early interaction. Moreover, an individual's self-contingency is influenced by the way the individual coordinates with the partner. Our results imply that it is not appropriate to conceptualize interactive processes without simultaneously accounting for dynamically interrelated self-organizing processes. (c) 2016 APA, all rights reserved.
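As a rough illustration of the time-series logic described above, the sketch below contrasts self-contingency (predicting a partner's current behaviour from her own recent past) with interactive contingency (predicting it from the other partner's recent past) using simple lagged regression on simulated 1-s coded series. The published analyses are considerably more elaborate; the series, coupling coefficients, and function names here are synthetic and purely illustrative.

```python
# Sketch only: lagged-regression estimates of self- and interactive contingency
# on two simulated behaviour series coded once per second.
import numpy as np

rng = np.random.default_rng(2)
T = 300                                   # 300 one-second coding bins
infant = np.zeros(T)
mother = np.zeros(T)
for t in range(1, T):
    infant[t] = 0.6 * infant[t - 1] + rng.normal(scale=0.5)                        # infant self-contingency
    mother[t] = 0.5 * mother[t - 1] + 0.3 * infant[t - 1] + rng.normal(scale=0.5)  # self-contingency plus coordination with infant

def lagged_r2(y, x, lag=1):
    """Share of variance in y[t] explained by x[t - lag] with a one-predictor linear fit."""
    y_t, x_lag = y[lag:], x[:-lag]
    beta = np.polyfit(x_lag, y_t, 1)
    resid = y_t - np.polyval(beta, x_lag)
    return 1.0 - resid.var() / y_t.var()

print("mother self-contingency:        ", round(lagged_r2(mother, mother), 3))
print("mother coordination with infant:", round(lagged_r2(mother, infant), 3))
print("infant coordination with mother:", round(lagged_r2(infant, mother), 3))
```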
Injecting 1000 centistoke liquid silicone with ease and precision.
Benedetto, Anthony V; Lewis, Alan T
2003-03-01
Since the Food and Drug Administration approved the use of the 1000 centistoke liquid silicone, Silikon 1000, for intraocular injection, the off-label use of this injectable silicone oil as a permanent soft-tissue filler for facial rejuvenation has increased in the United States. Injecting liquid silicone by the microdroplet technique is the most important preventive measure that one can use to avoid the adverse sequelae of silicone migration and granuloma formation, especially when injecting silicone to improve small facial defects resulting from acne scars, surgical procedures, or photoaging. To introduce an easy method for injecting a viscous silicone oil by the microdroplet technique, using an inexpensive syringe and needle that currently is available from distributors of medical supplies in the United States. We suggest the use of a Becton Dickinson 3/10 cc insulin U-100 syringe to inject Silikon 1000. This syringe contains up to 0.3 mL of fluid, and its barrel is clearly marked with an easy-to-read scale of large cross-hatches. Each cross-hatch marking represents either a unit value of 0.01 mL or a half-unit value of 0.005 mL of fluid, which is the approximate volume preferred when injecting liquid silicone into facial defects. Because not enough negative pressure can be generated in this needle and syringe to draw up the viscous silicone oil, we describe a convenient and easy method for filling this 3/10 cc diabetic syringe with Silikon 1000. We have found that by using the Becton Dickinson 3/10 cc insulin U-100 syringe, our technique of injecting minute amounts of Silikon 1000 is facilitated because each widely spaced cross-hatch on the side of the syringe barrel is easy to read and measures exact amounts of the silicone oil. These lines of the scale on the syringe barrel are so large and clearly marked that it is virtually impossible to overinject the most minute amount of silicone. Sequential microdroplets of 0.01 cc or less of Silikon 1000 can be measured and injected with the greatest ease and precision so that inadvertent overdosing and complications can be avoided.
Action Recognition Using Nonnegative Action Component Representation and Sparse Basis Selection.
Wang, Haoran; Yuan, Chunfeng; Hu, Weiming; Ling, Haibin; Yang, Wankou; Sun, Changyin
2014-02-01
In this paper, we propose using high-level action units to represent human actions in videos and, based on such units, a novel sparse model is developed for human action recognition. There are three interconnected components in our approach. First, we propose a new context-aware spatial-temporal descriptor, named locally weighted word context, to improve the discriminability of the traditionally used local spatial-temporal descriptors. Second, from the statistics of the context-aware descriptors, we learn action units using the graph regularized nonnegative matrix factorization, which leads to a part-based representation and encodes the geometrical information. These units effectively bridge the semantic gap in action recognition. Third, we propose a sparse model based on a joint l2,1-norm to preserve the representative items and suppress noise in the action units. Intuitively, when learning the dictionary for action representation, the sparse model captures the fact that actions from the same class share similar units. The proposed approach is evaluated on several publicly available data sets. The experimental results and analysis clearly demonstrate the effectiveness of the proposed approach.
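As a purely illustrative sketch of the second component described above, the snippet below factorizes a matrix of per-video descriptor histograms into non-negative "action units" and per-video activations. Plain NMF from scikit-learn stands in for the graph-regularized NMF used in the paper, the context-aware descriptors and the l2,1-regularized sparse coding step are omitted, and the histogram matrix is a synthetic placeholder.

```python
# Sketch only: learning part-based "action units" from bag-of-descriptor histograms.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_videos, vocab_size, n_units = 120, 500, 20
H = rng.poisson(1.0, size=(n_videos, vocab_size)).astype(float)  # video x codeword count matrix

nmf = NMF(n_components=n_units, init="nndsvda", max_iter=500)
V = nmf.fit_transform(H)      # (120, 20): per-video activations over the learned units
U = nmf.components_           # (20, 500): action units as non-negative codeword combinations
print("activations:", V.shape, "units:", U.shape)
```

In the full method, a classifier or sparse coding stage would then operate on the unit activations rather than on the raw histograms, so that videos from the same action class share similar units.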
The attraction of emotions: Irrelevant emotional information modulates motor actions.
Ambron, Elisabetta; Foroni, Francesco
2015-08-01
Emotional expressions are important cues that capture our attention automatically. Although a wide range of work has explored the role and influence of emotions on cognition and behavior, little is known about the way that emotions influence motor actions. Moreover, considering how critical detecting emotional facial expressions in the environment can be, it is important to understand their impact even when they are not directly relevant to the task being performed. Our novel approach was to explore this issue from the attention-and-action perspective, using a task-irrelevant distractor paradigm in which participants are asked to reach for a target while a nontarget stimulus is also presented. We tested whether the movement trajectory would be influenced by irrelevant stimuli-faces with or without emotional expressions. The results showed that reaching paths veered toward faces with emotional expressions, in particular happiness, but not toward neutral expressions. This reinforces the view of emotions as attention-capturing stimuli that are, however, also potential sources of distraction for motor actions.
The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2017-01-01
Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities will impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels will modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted compared to the restricted condition, and compared to the low empathy participants. Remarkably, however, the latter showed exactly the opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.
The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.
Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo
2015-01-01
The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there is still little research on great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. On observation, it was found that the apex nasi amesiality of the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.
Argibay-Otero, Saray; Carballo, Rosa; Vázquez-López, Ezequiel M
2017-10-01
The asymmetric unit of the title compound, [ReCl(C5H5NO)2(CO)3]·C5H5NO, contains one molecule of the complex fac-[ReCl(4-pyOH)2(CO)3] (where 4-pyOH represents 4-hydroxypyridine) and one molecule of pyridin-4(1H)-one (4-HpyO). In the molecule of the complex, the Re atom is coordinated to two N atoms of the two 4-pyOH ligands, three carbonyl C atoms in a facial configuration, and the Cl atom. The resulting geometry is slightly distorted octahedral. In the crystal structure, both fragments are associated by hydrogen bonds; two 4-HpyO molecules bridge between two molecules of the complex, using the O=C group as an acceptor for two different HO- groups of coordinated 4-pyOH from two neighbouring metal complexes. The resulting square arrangements are extended into infinite chains by hydrogen bonds involving the N-H groups of the 4-HpyO molecule and the chloride ligands. The chains are further stabilized by π-stacking interactions.
Wang, Fang; Yu, Jia Ming; Yang, De Qi; Gao, Qian; Hua, Hui; Liu, Yang
2017-02-01
To show how the distribution of facial exposure to biologically effective UV irradiance for non-melanoma skin cancer changes with rotation angle. This study selected the cheek, nose, and forehead as representative facial sites for UV irradiance measurements, which were performed using a rotating manikin and a spectroradiometer. The measured UV irradiance was weighted using action spectra to calculate the biologically effective UV irradiance that causes non-melanoma skin cancer (UVBEnon-mel). The biologically effective UV radiant exposure (HBEnon-mel) was calculated by summing the UVBEnon-mel data collected over the exposure period. This study revealed the following: (1) the times and solar elevation angles (SEA) of maximum UVA and UVB irradiance exposure for the cheek, nose and forehead differed from those of the ambient UV irradiance and were influenced by the rotation angles; (2) UV irradiance exposure increased in the following order: cheek < nose < forehead; (3) the distribution of UVBEnon-mel irradiance differed from that of unweighted UV radiation (UVR) and was influenced by the rotation angles and exposure times; and (4) the maximum percentage decreases in the UVBEnon-mel radiant exposure for the cheek, nose and forehead from 0° to 180° were 48.41%, 69.48% and 71.71%, respectively. Rotation angles relative to the sun influence the face's exposure to biologically effective UV for non-melanoma skin cancer. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
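The weighting-and-summation step described above is a standard spectral calculation; the sketch below illustrates it with entirely synthetic spectra (the wavelength grid, irradiance values and action-spectrum shape are placeholders, not the study's measurements).

```python
# A minimal sketch (not the authors' code) of biologically effective UV
# irradiance and radiant exposure: spectral irradiance E(lambda) is weighted
# by an action spectrum S(lambda), integrated over wavelength, and summed
# over the exposure period. All numbers below are synthetic placeholders.
import numpy as np

wavelengths = np.arange(290, 401)                                    # nm, UVB+UVA band
spectral_irradiance = np.exp((wavelengths - 290) / 40.0) * 1e-4      # W m-2 nm-1 (fake)
action_spectrum = np.exp(-(wavelengths - 300) / 20.0)                # relative weighting (fake)
action_spectrum /= action_spectrum.max()

# Biologically effective irradiance: integral over wavelength of E * S.
E_be = np.trapz(spectral_irradiance * action_spectrum, wavelengths)  # W m-2

# Radiant exposure: sum of E_be * dt over each measurement interval.
dt_seconds = 600                                                     # one reading every 10 minutes
E_be_series = np.full(6 * 8, E_be)                                   # 8 hours of identical readings (fake)
H_be = np.sum(E_be_series * dt_seconds)                              # J m-2
print(f"Effective irradiance {E_be:.3e} W/m2, radiant exposure {H_be:.1f} J/m2")
```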
Profico, Antonio; Piras, Paolo; Buzi, Costantino; Di Vincenzo, Fabio; Lattarini, Flavio; Melchionna, Marina; Veneziano, Alessio; Raia, Pasquale; Manzi, Giorgio
2017-12-01
The evolutionary relationship between the base and face of the cranium is a major topic of interest in primatology. These areas of the skull possibly respond to different selective pressures, yet they are often said to be tightly integrated. In this paper, we analyzed shape variability in the cranial base and the facial complex in Cercopithecoidea and Hominoidea. We used a landmark-based approach to single out the effects of size (evolutionary allometry), morphological integration, modularity, and phylogeny (under Brownian motion) on skull shape variability. Our results demonstrate that the cranial base and the facial complex exhibit different responses to different factors, which produces only a limited degree of morphological integration between them. Facial shape variation appears primarily influenced by body size and sexual dimorphism, whereas the cranial base is mostly influenced by functional factors. The different adaptations affecting the two modules suggest they are best studied as separate and independent units, and that, at least when dealing with Catarrhines, caution must be exercised regarding the notion of strong cranial integration that is commonly invoked for the evolution of their skull shape. © 2017 Wiley Periodicals, Inc.
Otero, D Peña; Domínguez, D Vazquez; Fernández, L Hernanz; Magariño, A Santano; González, V Jimenez; Klepzing, J V García; Montesinos, J V Beneit
2017-03-02
To comparatively assess the efficacy of four different therapeutic strategies to prevent the development of facial pressure ulcers (FPUs) related to the use of non-invasive mechanical ventilation (NIV) with oro-nasal masks in critically ill hospitalised patients. This randomised controlled trial was performed at the high dependency unit in the University General Hospital Gregorio Marañón in Madrid, Spain. Overall, 152 patients with acute respiratory failure were recruited. All patients were hospitalised and received NIV through oro-nasal masks. The Norton tool was used to evaluate the general risk of developing pressure ulcers (PUs). Subjects were divided into four groups, each of them receiving a different treatment. Tissue assessment and preventive care were performed by a member of the research team. The incidence of FPUs was significantly lower in the group receiving a solution of hyperoxygenated fatty acids (HOFA) when compared with each of the other therapeutic strategies: direct mask (p=0.055), adhesive thin dressing (p=0.03) and adhesive foam dressing (p<0.001). The application of HOFA on the facial skin in contact with the oro-nasal masks showed the highest efficacy in the prevention of NIV-related FPUs.
Usefulness of BFB/EMG in facial palsy rehabilitation.
Dalla Toffola, Elena; Bossi, Daniela; Buonocore, Michelangelo; Montomoli, Cristina; Petrucci, Lucia; Alfonsi, Enrico
2005-07-22
To analyze and to compare the recovery and the development of synkinesis in patients with idiopathic facial palsy (Bell's palsy) following treatment with two methods of rehabilitation, kinesitherapy (KT) and biofeedback/EMG (BFB/EMG). Retrospective case-series review. Seventy-four patients with Bell's palsy were clinically evaluated within 1 month from onset of palsy and at 12 months after palsy (House scale and synkinesis evaluation). Electromyography (EMG) and electroneurography (ENG) were performed about 4 weeks after palsy to better evaluate functional abnormalities due to facial nerve lesion. The patients followed two different rehabilitation protocols: the first 32 patients were treated with therapeutic exercises performed by therapists (KT group), and the subsequent 42 patients were treated using BFB/EMG methods (BFB group) with inhibition of synkinetic movement as the primary goal. KT and BFB patients were evaluated for clinical and neurophysiological characteristics before rehabilitative treatment. BFB patients showed better clinical recovery and less synkinesis than KT patients. BFB/EMG seems to be more useful than KT in Bell's palsy treatment. This could be due to the fact that BFB/EMG gives more accurate information than KT on muscle activation, allowing better modulation in voluntary recruitment of motor units.
Effect of emergency department CT on neuroimaging case volume and positive scan rates.
Oguz, Kader Karli; Yousem, David M; Deluca, Tom; Herskovits, Edward H; Beauchamp, Norman J
2002-09-01
The authors performed this study to determine the effect a computed tomographic (CT) scanner in the emergency department (ED) has on neuroimaging case volume and positive scan rates. The total numbers of ED visits and neuroradiology CT scans requested from the ED were recorded for 1998 and 2000, the years before and after the installation of a CT unit in the ED. For each examination type (brain, face, cervical spine), studies were graded for major findings (those that affected patient care), minor findings, and normal findings. The CT utilization rates and positive study rates were compared for each type of study performed for both years. There was a statistically significant increase in the utilization rate after installation of the CT unit (P < .001). The fractions of studies with major findings, minor findings, and normal findings changed significantly after installation of the CT unit for facial examinations (P = .002) but not for brain (P = .12) or cervical spine (P = .24) examinations. In all types of studies, the percentage of normal examinations increased. In toto, there was a significant decrease in the positive scan rate after installation of the CT scanner (P = .004). After installation of a CT scanner in the ED, there was increased utilization and a decreased rate of positive neuroradiologic examinations, the latter primarily due to lower positive rates for facial CT scans.
Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P
2016-01-01
Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample ( N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
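The abstract names biometrical (twin) modelling without detail; purely as an illustrative aside, the classical Falconer approximation relates heritability to the difference between monozygotic and dizygotic twin correlations (the study itself fitted full biometrical models, so this is not their exact estimator):

```latex
% Classical twin approximation (illustrative only, not the study's full model):
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad
e^2 \approx 1 - r_{MZ}
```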
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage "low-motor" interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism.
Imagine no religion: Heretical disgust, anger and the symbolic purity of mind.
Ritter, Ryan S; Preston, Jesse L; Salomon, Erika; Relihan-Johnson, Daniel
2016-01-01
Immoral actions, including physical/sexual (e.g., incest) and social (e.g., unfairness) taboos, are often described as disgusting. But what about immoral thoughts, more specifically, thoughts that violate religious beliefs? Do heretical thoughts taint the purity of mind? The present research examined heretical disgust using self-report measures and facial electromyography. Religious thought violations consistently elicited both self-reported disgust and anger. Feelings of disgust also predicted harsh moral judgement, independent of anger, and were mediated by feelings of "contamination". However, religious thought violations were not associated with a disgust facial expression (i.e., levator labii muscle activity) that was elicited by physically disgusting stimuli. We conclude that people (especially more religious people) do feel disgust in response to heretical thoughts that is meaningfully distinct from anger as a moral emotion. However, heretical disgust is not embodied in a physical disgust response. Rather, disgust has a symbolic moral value that marks heretical thoughts as harmful and aversive.
de Mendonça, Maria Cristina C; Segheto, Natália N; Aarestrup, Fernando M; Aarestrup, Beatriz J V
2018-02-01
Phenol peeling is considered an important agent in the treatment of facial rejuvenation; however, its use has limitations due to its high potential for side effects. This article proposes a new peeling application technique for the treatment of photoaging, aiming to evaluate, clinically and histopathologically, the efficacy of a new way of applying 88% phenol, using a punctuated pattern. The procedure was performed in an outpatient setting, with female patients, on static wrinkles and high flaccidity areas of the face. Accompanying photographs and skin samples were taken for histopathological analysis before and after treatment. It was shown that 88% phenol applied topically using a punctuated technique is effective in skin rejuvenation. The authors thus suggest, based on this new proposal, that further studies be conducted with a larger group of patients to better elucidate the action mechanisms of 88% phenol. This new form of application considerably reduced patients' withdrawal from their regular activities, besides reducing the cost, compared with the conventional procedure.
Role of Kabat physical rehabilitation in Bell's palsy: a randomized trial.
Barbara, Maurizio; Antonini, Giovanni; Vestri, Annarita; Volpini, Luigi; Monini, Simonetta
2010-01-01
When applied at an early stage, Kabat's rehabilitation was shown to provide a better and faster recovery rate in comparison with non-rehabilitated patients. To assess the validity of an early rehabilitative approach to Bell's palsy patients. A randomized study involved 20 consecutive patients (10 males, 10 females; aged 35-42 years) affected by Bell's palsy, classified according to the House-Brackmann (HB) grading system and grouped on the basis of undergoing or not early physical rehabilitation according to Kabat, i.e. a proprioceptive neuromuscular rehabilitation. The evaluation was carried out by measuring the amplitude of the compound motor action potential (CMAP), as well as by observing the initial and final HB grade, at days 4, 7 and 15 after onset of facial palsy. Patients belonging to the rehabilitation group clearly showed an overall improvement of clinical stage at the planned final observation, i.e. 15 days after onset of facial palsy, without presenting greater values of CMAP.
ERIC Educational Resources Information Center
Galligani, Dennis J.
This second volume of the University of California, Irvine (UCI), Student Affirmative Action (SAA) Five-Year Plan contains the complete student affirmative action plans as submitted by 33 academic and administrative units at UCI. The volume is organized by type of unit: academic units, academic retention units, outreach units, and student life…
Chia, Justin; Eroglu, Fehime Kara; Özen, Seza; Orhan, Dicle; Montealegre-Sanchez, Gina; de Jesus, Adriana A; Goldbach-Mansky, Raphaela; Cowen, Edward W
2016-01-01
Key teaching points • SAVI is a recently described interferonopathy resulting from constitutive activation of STING and up-regulation of IFN-β signaling. • SAVI is characterized by facial erythema with telangiectasia, acral/cold-sensitive tissue ulceration and amputations, and interstitial lung disease. It has overlapping features with Aicardi-Goutières syndrome and familial chilblain lupus. • Traditional immunosuppressive medications and biologic therapies appear to be of limited benefit, but JAK inhibitors may impact disease progression. Published by Elsevier Inc.
Facial Scar Revision: Understanding Facial Scar Treatment
Choi, Hyoung Ju; Shin, Sung Hee
2016-08-01
The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on the facial muscle function, subjective symptoms related to paralysis and depression in patients with facial palsy. This study was a quasi-experimental research with a non-equivalent control group non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscular exercise program including facial massage was performed 20 minutes a day, 3 times a week for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscular function of the experimental group improved significantly compared to the control group. There was no significant difference in symptoms related to paralysis between the experimental group and control group. The level of depression in the experimental group was significantly lower than the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.
Facial neuropathy with imaging enhancement of the facial nerve: a case report
Mumtaz, Sehreen; Jensen, Matthew B
2014-01-01
A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns with different causes of facial neuropathy. PMID:25574155
Skin Conditions of Youths 12-17, United States. Vital and Health Statistics; Series 11, Number 157.
ERIC Educational Resources Information Center
Roberts, Jean; Ludford, Jacqueline
This report of the National Center for Health Statistics presents national estimates of the prevalence of facial acne and other skin lesions among noninstitutionalized youths aged 12-17 years by age, race, sex, geographic region, population size of place of residence, family income, education of parent, overall health, indications of stress,…
Ethnic and Gender Considerations in the Use of Facial Injectables: African-American Patients.
Burgess, Cheryl; Awosika, Olabola
2015-11-01
The United States is becoming increasingly more diverse as the nonwhite population continues to rise faster than ever. By 2044, the US Census Bureau projects that greater than 50% of the US population will be of nonwhite descent. Ethnic patients are the quickest growing portion of the cosmetic procedures market, with African-Americans comprising 7.1% of the 22% of ethnic minorities who received cosmetic procedures in the United States in 2014. The cosmetic concerns and natural features of this ethnic population are unique and guided by differing structural and aging processes than their white counterparts. As people of color increasingly seek nonsurgical cosmetic procedures, dermatologists and cosmetic surgeons must become aware that the Westernized look does not necessarily constitute beauty in these diverse people. The use of specialized aesthetic approaches and understanding of cultural and ethnic-specific features are warranted in the treatment of these patients. This article will review the key principles to consider when treating African-American patients, including the average facial structure of African-Americans, the impact of their ethnicity on aging and structure of face, and soft-tissue augmentation strategies specific to African-American skin.
Segal, Nancy L
2014-02-01
The story of her allegedly stolen twin brother in Armenia is recounted by a 'singleton twin' living in the United States. The behavioral consequences and societal implications of this loss are considered. This case is followed by twin research reports on the evolution of sleep length, dental treatment of craniopagus conjoined twins, cryopreserved double embryo transfer (DET), and gender options in multiple pregnancy. Current events include the diagnosis of appendectomy in one identical twin, the accomplishments of autistic twin marathon runners, the power of three-dimensional (3D) facial recognition, and the goals of twin biathletes heading to the 2014 Sochi Olympics in Russia.
Orthodontic Protocol Using Mini-Implant for Class II Treatment in Patient with Special Needs
Carvalho Ferreira, Fernando Pedrin; de Paula, Eliana de Cássia Molina; Ferreira Conti, Ana Claudia de Castro; Valarelli, Danilo Pinelli; de Almeida-Pedrin, Renata Rodrigues
2016-01-01
Improving facial and dental appearance and social interaction are the main factors for special needs (SN) patients to seek orthodontic treatment. The cooperation of SN patients and their parents is crucial for treatment success. Objective. To show through a case report the satisfactory results, both functional and esthetic, in patients with intellectual disability, congenital nystagmus, and severe scoliosis. Materials Used. Pendulum device with mini-implants as anchorage unit. Results. Improvement of facial and dental esthetics, correction of Class II malocclusion, and no root resorption shown in the radiographic follow-up. Conclusion. Knowing the limitations of SN patients, having a trained team, motivating and counting on the cooperation of parents and patients, and employing quick and low-cost orthodontic therapy have been shown to be the essential factors for treatment success. PMID:27847652
Traumatic facial nerve neuroma with facial palsy presenting in infancy.
Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K
2010-07-01
To describe the management of traumatic neuroma of the facial nerve in a child and literature review. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such lesion is complex in any age group but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.
Outcome of different facial nerve reconstruction techniques.
Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo
There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients underwent primary nerve reconstruction except 7 patients, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. With the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With regard to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best results without any neurological deficit. Among the various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique, with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Choi, Kyung-Sik; Kim, Min-Su; Kwon, Hyeok-Gyu; Jang, Sung-Ho
2014-01-01
Objective: Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods: We prospectively collected data from 11 patients with VS who underwent preoperative DTT for the facial nerve. Imaging results were correlated with intraoperative findings. Postoperative DTT was performed at 3 months after surgery. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results: Facial nerve courses on preoperative tractography were entirely correlated with intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3 cases, anterosuperior in 2 cases, and posteroinferior in 1 case. On postoperative facial nerve tractography, preservation of the facial nerve was confirmed in all patients. No patient had severe facial paralysis at one year postoperatively. Conclusion: This study shows that DTT for preoperative identification of the facial nerve in VS surgery could be a very accurate and useful radiological method and could help improve facial nerve preservation. PMID:25289119
Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H
2016-08-01
Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is the development of image evaluation methods to analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images. The binary images were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated in 643 female Japanese subjects. Through manual evaluation of the cheek images, we confirmed that the proposed method was sufficiently sensitive to measure the melanin concentration and the numbers and sizes of individual pigmented spots. The image analysis results for the 643 Japanese women indicated clear relationships between age and the changes in the pigmented spots. We developed a new quantitative evaluation method for individual pigmented spots in facial skin. This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
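The counting step described above (binarisation, noise reduction, connected-components labelling, per-spot statistics) can be illustrated with a short sketch. This is an assumed reconstruction using SciPy, not the authors' pipeline, and the "melanin" image is synthetic.

```python
# A minimal sketch of spot counting on a melanin-concentration image:
# threshold into a binary map, clean it up, label connected components,
# and report the number, size and mean concentration of each spot.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
melanin = rng.normal(0.2, 0.05, size=(256, 256))            # synthetic background
for cy, cx in [(60, 80), (150, 200), (200, 60)]:             # fake pigmented spots
    yy, xx = np.ogrid[:256, :256]
    melanin += 0.4 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)

binary = melanin > 0.45                                      # binarise the melanin map
binary = ndimage.binary_opening(binary, iterations=1)        # simple noise reduction

labels, n_spots = ndimage.label(binary)                      # connected-components labelling
sizes = np.bincount(labels.ravel())[1:]                      # pixel count per spot
mean_conc = ndimage.mean(melanin, labels, index=range(1, n_spots + 1))

print(f"{n_spots} pigmented spots, sizes (px): {sizes}, mean melanin: {np.round(mean_conc, 3)}")
```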
The Political Gender Gap: Gender Bias in Facial Inferences that Predict Voting Behavior
Chiao, Joan Y.; Bowman, Nicholas E.; Gill, Harleen
2008-01-01
Background: Throughout human history, a disproportionate degree of political power around the world has been held by men. Even in democracies where the opportunity to serve in top political positions is available to any individual elected by the majority of their constituents, most of the highest political offices are occupied by male leaders. What psychological factors underlie this political gender gap? Contrary to the notion that people use deliberate, rational strategies when deciding whom to vote for in major political elections, research indicates that people use shallow decision heuristics, such as impressions of competence solely from a candidate's facial appearance, when deciding whom to vote for. Because gender has previously been shown to affect a number of inferences made from the face, here we investigated the hypothesis that gender of both voter and candidate affects the kinds of facial impressions that predict voting behavior. Methodology/Principal Findings: Male and female voters judged a series of male and female political candidates on how competent, dominant, attractive and approachable they seemed based on their facial appearance. Then they saw a series of pairs of political candidates and decided which politician they would vote for in a hypothetical election for President of the United States. Results indicate that both gender of voter and candidate affect the kinds of facial impressions that predict voting behavior. All voters are likely to vote for candidates who appear more competent. However, male candidates that appear more approachable and female candidates who appear more attractive are more likely to win votes. In particular, men are more likely to vote for attractive female candidates whereas women are more likely to vote for approachable male candidates. Conclusions/Significance: Here we reveal gender biases in the intuitive heuristics that voters use when deciding whom to vote for in major political elections. Our findings underscore the impact of gender and physical appearance on shaping voter decision-making and provide novel insight into the psychological foundations underlying the political gender gap. PMID:18974841
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
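As a rough, hedged illustration of the general idea of a mixture of sparse coding submodels with a category-level "explaining-away" comparison (not the authors' hierarchical network, which sits on a Gabor front end and performs full Bayesian inference over latent variables), the toy sketch below trains two tiny dictionaries on synthetic "face" and "object" data and computes a soft category posterior for a new input.

```python
# Illustrative toy only: two sparse dictionaries, one per category, with a
# Gaussian-likelihood comparison so the better-fitting submodel "explains
# away" the input. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def ista(D, x, lam=0.1, n_iter=60):
    """Sparse-code x under dictionary D via ISTA (L1-regularised least squares)."""
    L = np.linalg.norm(D, 2) ** 2                      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

def fit_dictionary(X, n_atoms, lam=0.1, n_epochs=10, lr=0.05):
    """Very small alternating scheme: sparse-code the data, then gradient-update D."""
    d, n = X.shape
    D = rng.normal(size=(d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        A = np.stack([ista(D, X[:, i], lam) for i in range(n)], axis=1)
        D += lr * (X - D @ A) @ A.T                    # reduce reconstruction error
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D

def category_posterior(dicts, x, lam=0.1, sigma=0.5):
    """Compare submodels; the winner 'explains away' the input for the loser."""
    log_p = []
    for D in dicts:
        a = ista(D, x, lam)
        resid = x - D @ a
        log_p.append(-0.5 * resid @ resid / sigma**2 - lam * np.abs(a).sum())
    log_p = np.array(log_p)
    post = np.exp(log_p - log_p.max())
    return post / post.sum()

# Synthetic stand-ins for "face" and "object" training patches.
faces   = rng.normal(size=(64, 120)) + 1.0
objects = rng.normal(size=(64, 120)) - 1.0
D_face, D_obj = fit_dictionary(faces, 32), fit_dictionary(objects, 32)

print("P(face), P(object) for a face-like input:",
      category_posterior([D_face, D_obj], faces[:, 0]))
```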
Facial animation on an anatomy-based hierarchical face model
NASA Astrophysics Data System (ADS)
Zhang, Yu; Prakash, Edmond C.; Sung, Eric
2003-04-01
In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like the real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. The presence of the skull model gives our facial model the advantage of both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
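As a minimal illustration of the "numerical integration of the governing dynamic equations" mentioned above, the sketch below advances a single mass-spring skin node under a made-up muscle force using semi-implicit Euler; the constants and force profile are assumptions, not the paper's model.

```python
# Toy sketch: one skin node attached to the skull by a damped spring and
# driven by a time-varying muscle force, integrated with semi-implicit Euler.
import numpy as np

mass, stiffness, damping = 0.01, 50.0, 0.5     # illustrative constants
rest = np.array([0.0, 0.0, 0.0])               # attachment (skull) point
pos = rest.copy()
vel = np.zeros(3)
dt = 1e-3

def muscle_force(t):
    # Simple contraction pulse along one direction; stands in for the
    # anatomically-motivated muscle actuators described in the abstract.
    return np.array([0.0, 0.2 * np.sin(2 * np.pi * t), 0.0])

for step in range(1000):
    t = step * dt
    spring = -stiffness * (pos - rest)         # elastic pull toward the rest shape
    damp = -damping * vel                      # viscous damping of soft tissue
    acc = (spring + damp + muscle_force(t)) / mass
    vel += acc * dt                            # semi-implicit Euler update
    pos += vel * dt

print("final displacement of the skin node:", pos)
```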
Human Facial Expressions as Adaptations:Evolutionary Questions in Facial Expression Research
SCHMIDT, KAREN L.; COHN, JEFFREY F.
2007-01-01
The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989
Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui
2014-01-01
Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898
Facial dynamics and emotional expressions in facial aging treatments.
Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar
2015-03-01
Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.
The effects of an action video game on visual and affective information processing.
Bailey, Kira; West, Robert
2013-04-04
Playing action video games can have beneficial effects on visuospatial cognition and negative effects on social information processing. However, these two effects have not been demonstrated in the same individuals in a single study. The current study used event-related brain potentials (ERPs) to examine the effects of playing an action or non-action video game on the processing of emotion in facial expression. The data revealed that 10h of playing an action or non-action video game had differential effects on the ERPs relative to a no-contact control group. Playing an action game resulted in two effects: one that reflected an increase in the amplitude of the ERPs following training over the right frontal and posterior regions that was similar for angry, happy, and neutral faces; and one that reflected a reduction in the allocation of attention to happy faces. In contrast, playing a non-action game resulted in changes in slow wave activity over the central-parietal and frontal regions that were greater for targets (i.e., angry and happy faces) than for non-targets (i.e., neutral faces). These data demonstrate that the contrasting effects of action video games on visuospatial and emotion processing occur in the same individuals following the same level of gaming experience. This observation leads to the suggestion that caution should be exercised when using action video games to modify visual processing, as this experience could also have unintended effects on emotion processing. Published by Elsevier B.V.
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Understanding the Burden of Adult Female Acne
Kawata, Ariane K.; Daniels, Selena R.; Yeomans, Karen; Burk, Caroline T.; Callender, Valerie D.
2014-01-01
Objective: Typically regarded as an adolescent condition, acne among adult females is also prevalent. Limited data are available on the clinical characteristics and burden of adult female acne. The study objective was to describe clinical characteristics and psychosocial impact of acne in adult women. Design: Cross-sectional, web-based survey. Setting: Data were collected from a diverse sample of United States females. Participants: Women ages 25 to 45 years with facial acne (≥25 visible lesions). Measurements: Outcomes included sociodemographic and clinical characteristics, perceptions, coping behaviors, psychosocial impact of acne (health-related quality of life using acne-specific Quality of Life questionnaire and psychological status using Patient Health Questionnaire), and work/productivity. Results: A total of 208 women completed the survey (mean age 35±6 years), comprising White/Caucasian (51.4%), Black/African American (24.5%), Hispanic/Latino (11.1%), Asian (7.7%), and Other (5.3%). Facial acne presented most prominently on cheeks, chin, and forehead and was characterized by erythema, postinflammatory hyperpigmentation, and scarring. Average age of adult onset was 25±6 years, and one-third (33.7%) were diagnosed with acne as an adult. The majority (80.3%) had 25 to 49 visible facial lesions. Acne was perceived as troublesome and impacted self-confidence. Makeup was frequently used to conceal acne. Facial acne negatively affected health-related quality of life, was associated with mild/moderate symptoms of depression and/or anxiety, and impacted ability to concentrate on work or school. Conclusion: Results highlight the multifaceted impact of acne and provide evidence that adult female acne is under-recognized and burdensome. PMID:24578779
Phenytoin (Dilantin) and acupuncture therapy in the treatment of intractable oral and facial pain.
Lu, Dominic P; Lu, Winston I; Lu, Gabriel P
2011-01-01
Phenytoin is an anti-convulsant and anti-arrhythmic medication. Manufactured by various pharmaceutical companies under various brand names, phenytoin (PHT) is also known as Dilantin, Hydantoin or Phenytek in the United States; Dilantin or Remytoine in Canada; Epamin or Hidantoina in Mexico; and Fenidatoin or Fenitron, among other names, elsewhere in the world. Phenytoin is especially useful for patients suffering from intractable oral and facial pain, particularly those who exhibit the anger, stress, depression and irrational emotions commonly seen in patients with oral and facial pain. When used properly, phenytoin is also an effective anxiolytic drug in addition to its therapeutic effects on pain, and can be used alone or, even better, combined with other compatible sedatives. Phenytoin is particularly valuable when combined with acupuncture for patients with trigeminal neuralgia, glossopharyngeal neuralgia, Bell's palsy, and some other facial paralyses and pain. It also has the advantage of keeping the patient relatively lucid after treatment. Either PHT or acupuncture alone can benefit patients, but the success of the treatment outcome may be limited. We found that by combining acupuncture and PHT with Selective Drug Uptake Enhancement, stimulating the middle finger at the first segment of the ventral (palmar) and lateral surfaces, and prescribing PHT with the dosage predetermined for each patient by the Bi-Digital O-Ring Test (BDORT), the treatment outcome was much better, with less recurrence and lower intensity of pain during episodes of attack. Patients with Bell's palsy benefited most from acupuncture therapy, which could completely resolve the illness.
Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho
2014-01-01
Recently, increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG) and House-Brackmann (HB) grade. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio was 94.1% on 7th postoperative day ENoG. At the postoperative 3-month and 1-year follow-up examinations with DT imaging facial nerve tractography and ENoG, good results for facial nerve function were observed. PMID:25024825
Pattern of facial palsy in a typical Nigerian specialist hospital.
Lamina, S; Hanif, S
2012-12-01
Data on the incidence of facial palsy are generally lacking in Nigeria. To assess the six-year incidence of facial palsy at Murtala Muhammed Specialist Hospital (MMSH), Kano, Nigeria. The records of patients diagnosed with facial problems between January 2000 and December 2005 were scrutinized. Data on diagnosis, age, sex, side affected, occupation and causes were obtained. A total of 698 patients with facial problems were recorded, of whom 594 (85%) were diagnosed with facial palsy. Among those diagnosed with facial palsy, males (56.2%) had a higher incidence than females; the 20-34 years age group (40.3%) had the greatest prevalence; the commonest cause of facial palsy was found to be idiopathic (39.1%), and it was most common among businessmen (31.6%). Right-sided facial palsy (52.2%) was predominant. The incidence of facial palsy was highest in 2003 (25.3%) and decreased from 2004. It was concluded that the incidence of facial palsy was high and that Bell's palsy remains the most common cause of facial (nerve) paralysis.
ERIC Educational Resources Information Center
Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno
2007-01-01
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
NASA Astrophysics Data System (ADS)
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by research on laser scanning and advanced 3D tools for producing complex facial models. However, these approaches still lack facial expression driven by emotional state. Facial skin colour, which is closely related to human emotion, is needed to enhance the effect of facial expression. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are very close to genuine human expressions and also enhance the facial expression of the virtual human.
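The two blending operations named in the abstract can be sketched as follows; the colour values and the two expression parameters are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of the named operations: linear interpolation between a
# neutral and an emotional skin colour, and bilinear interpolation over two
# hypothetical expression parameters.
import numpy as np

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation over a 2x2 grid of corner colours."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

neutral = [200, 170, 150]      # neutral skin tone (RGB, made up)
angry   = [220, 120, 110]      # flushed tone for anger (made up)
print("50% toward anger:", lerp(neutral, angry, 0.5))

# Four corner tones spanning two hypothetical expression axes.
corners = ([200, 170, 150], [220, 120, 110],   # low-valence row
           [210, 180, 160], [230, 150, 130])   # high-valence row
print("blend at (u=0.3, v=0.7):", bilerp(*corners, 0.3, 0.7))
```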
Cerit, Hilâl; Veer, Ilya M; Dahan, Albert; Niesters, Marieke; Harmer, Catherine J; Miskowiak, Kamilla W; Rombouts, Serge A R B; Van der Does, Willem
2015-12-01
Studies on the neural effects of Erythropoietin (EPO) indicate that EPO may have antidepressant effects. Due to its hematopoietic effects, EPO may cause serious side-effects with repeated administration if patients are not monitored extensively. ARA290 is an EPO-analog peptide without such hematopoietic side-effects but may have neurotrophic and antidepressant effects. The aim of this study was to investigate the possible antidepressant effects of ARA290 in a neuropsychological model of drug action. Healthy participants (N=36) received ARA290 (2mg) or placebo in a double-blind, randomized, parallel-group design. Neural and cognitive effects were assessed one week after administration. Primary outcome measures were the neural processing of fearful vs happy faces and the behavioral recognition of emotional facial expressions. ARA290-treated individuals displayed lower neural responses to happy faces in the fusiform gyrus. ARA290 tended to lower the recognition of happy and disgust facial expressions. Although ARA290 was not associated with a better memory for positive words, it was associated with faster categorization of positive vs negative words. Finally, ARA290 increased attention towards positive emotional pictures. No effects were observed on mood and affective symptoms. ARA290 may modulate some aspects of emotional processing, however, the direction and the strength of its effects do not unequivocally support an antidepressant-like profile for ARA290. Future studies may investigate the effects of different timing and dose. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
Acevedo, Bianca P; Aron, Elaine N; Aron, Arthur; Sangster, Matthew-Donald; Collins, Nancy; Brown, Lucy L
2014-07-01
Theory and research suggest that sensory processing sensitivity (SPS), found in roughly 20% of humans and over 100 other species, is a trait associated with greater sensitivity and responsiveness to the environment and to social stimuli. Self-report studies have shown that high-SPS individuals are strongly affected by others' moods, but no previous study has examined neural systems engaged in response to others' emotions. This study examined the neural correlates of SPS (measured by the standard short-form Highly Sensitive Person [HSP] scale) among 18 participants (10 females) while viewing photos of their romantic partners and of strangers displaying positive, negative, or neutral facial expressions. One year apart, 13 of the 18 participants were scanned twice. Across all conditions, HSP scores were associated with increased brain activation of regions involved in attention and action planning (in the cingulate and premotor area [PMA]). For happy and sad photo conditions, SPS was associated with activation of brain regions involved in awareness, integration of sensory information, empathy, and action planning (e.g., cingulate, insula, inferior frontal gyrus [IFG], middle temporal gyrus [MTG], and PMA). As predicted, for partner images and for happy facial photos, HSP scores were associated with stronger activation of brain regions involved in awareness, empathy, and self-other processing. These results provide evidence that awareness and responsiveness are fundamental features of SPS, and show how the brain may mediate these traits.
Negative ion treatment increases positive emotional processing in seasonal affective disorder.
Harmer, C J; Charles, M; McTavish, S; Favaron, E; Cowen, P J
2012-08-01
Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has efficacy in randomized clinical trials in seasonal affective disorder (SAD). The current study investigated whether a single session of HDNI treatment could reverse negative affective biases seen in seasonal depression using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expression and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism for treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.
Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.
2014-01-01
For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework. PMID:24379153
Lee, Chi-Heon; Moon, Suk-Hee; Park, Ki-Min; Kang, Youngjin
2016-12-01
In the title compound, [Ir(C11H8N)2(C18H14N)], the Ir(III) ion adopts a distorted octahedral coordination environment defined by three C,N-chelating ligands, one stemming from a 2-(4-phenyl-5-methylpyridin-2-yl)phenyl ligand and two from 2-(pyridin-2-yl)phenyl ligands, arranged in a facial manner. The Ir(III) ion lies almost in the equatorial plane [deviation = 0.0069 (15) Å]. In the crystal, intermolecular π-π stacking interactions, as well as intermolecular C-H⋯π interactions, are present, leading to a three-dimensional network.
Facial nerve palsy associated with a cystic lesion of the temporal bone.
Kim, Na Hyun; Shin, Seung-Ho
2014-03-01
Facial nerve palsy results in the loss of facial expression and is most commonly caused by a benign, self-limiting inflammatory condition known as Bell palsy. However, there are other conditions that may cause facial paralysis, such as neoplastic conditions of the facial nerve, traumatic nerve injury, and temporal bone lesions. We present a case of facial nerve palsy concurrent with a benign cystic lesion of the temporal bone, adjacent to the tympanic segment of the facial nerve. The patient's symptoms subsided after facial nerve decompression via a transmastoid approach.
Contemporary solutions for the treatment of facial nerve paralysis.
Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R
2015-06-01
After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for a two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages, along with ongoing controversies, and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.
Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study
Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen
2016-01-01
Brain responses to facial attractiveness induced by facial proportions are investigated using functional magnetic resonance imaging (fMRI) in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated yet realistic face images, which had varying facial proportions but the same neutral facial expression, bald head and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions whose response was modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predicted ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211
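As a rough illustration of the final regression step only (not the authors' CCA-based pipeline), the sketch below fits an ordinary least-squares model that predicts mean attractiveness ratings from a handful of facial-ratio components; the components and ratings are synthetic stand-ins.

```python
import numpy as np

# Hypothetical facial-ratio components (rows: faces, columns: ratio components).
rng = np.random.default_rng(0)
ratios = rng.normal(size=(41, 3))                            # e.g. eye spacing, nose/face width, jaw ratio
true_w = np.array([0.8, -0.3, 0.5])
ratings = ratios @ true_w + rng.normal(scale=0.2, size=41)   # simulated mean attractiveness ratings

# Ordinary least squares: predicted AR = X w (last column is the intercept).
X = np.column_stack([ratios, np.ones(len(ratios))])
w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
pred = X @ w
r = np.corrcoef(pred, ratings)[0, 1]
print(f"fitted weights: {w[:-1]}, intercept: {w[-1]:.3f}, r = {r:.2f}")
```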
7 CFR 275.16 - Corrective action planning.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false Corrective action planning. 275.16 Section 275.16... Corrective action planning. (a) Corrective action planning is the process by which State agencies shall...)/management unit(s) in the planning, development, and implementation of corrective action are those which: (1...
Nicholoff, T J; Del Castillo, C B; Velmonte, M X
Maxillofacial injuries resulting from trauma can be a challenge to the maxillofacial surgeon. Frequent causes of these injuries include automobile accidents, physical altercations, gunshot wounds, home accidents, athletic injuries and work injuries. Motor vehicle accidents tend to be the primary cause of most midface fractures and lacerations, due to the face hitting the dashboard, windshield or steering wheel, or the back of the front seat for passengers in the rear. Seatbelts have been shown to drastically reduce the incidence and severity of these injuries. In the United States, seatbelt laws have been enacted in several states, markedly reducing such trauma. In the Philippines, few individuals wear seat belts. Metro city traffic, however, has played a major role in reducing daytime MVA-related trauma, as there is usually insufficient speed in congested areas to cause severe impact damage; the same cannot be said for night driving, or for driving outside the city proper, where it is not uncommon for drivers to swerve into the lane of oncoming traffic in order to overtake the car in front, often at high speed. Thus, the potential for severe maxillofacial injuries and other trauma-related injuries increases in these circumstances. It is unfortunate that outside Metro Manila and other major cities there is no ready access to trauma or tertiary care centers, so these injuries can be catastrophic if not addressed adequately. With the exception of Le Fort II and III craniofacial fractures, most maxillofacial injuries are not life threatening by themselves, and treatment can therefore be delayed until more serious cerebral or visceral, potentially life-threatening injuries are addressed. Our patient was involved in an MVA in Zambales, was seen and stabilized initially in a provincial primary care center, and was then referred to a provincial secondary care center for further stabilization before his transfer to Manila and ultimately to our Maxillo-Facial Unit. There was a delay of more than two weeks in the definitive management because of this. As a result of the delay, fibrous tissue and bone callus formed between the various fracture lines, so that once definitive fracture management was attempted, it took on a more reconstructive nature. Hospital-based oral and maxillofacial surgeons are uniquely trained to manage all aspects of maxillofacial trauma, and their dental background uniquely qualifies them in the functional restoration of lower and midface fractures, where occlusion plays a most important role. Likewise, their training in clinical medicine, which is usually integrated into their residency education (12 months or more), puts them in a unique position to manage the basic medical needs of these patients comfortably. In instances where trauma affects other regions of the body, a multidisciplinary approach may be taken or consultations called for; in this instance, an ophthalmology consult was important. In fresh trauma, often seen in major trauma centers (i.e., overseas), a "Trauma Team" is on standby 24 hours a day and is prepared to assess and manage trauma patients almost immediately upon their arrival in the ER.
The trauma team is usually composed of a Trauma Surgeon who is a general surgeon with subspecialty training in traumatology who assesses and manages the visceral injuries, an Orthopedic Surgeon who manages fractures of the extremities, a Neurosurgeon for cerebral injuries and an Oral and Maxillo-Facial Surgeon for facial injuries. In some institutions, facial trauma call is alternated between the "three major head and neck specialty services", namely Oral and Maxillo-facial Surgery, Otolaryngology-Head & Neck Surgery and Plastic & Reconstructive Surgery. (ABSTRACT TRUNCATED)
Facial approximation-from facial reconstruction synonym to face prediction paradigm.
Stephan, Carl N
2015-05-01
Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.
Reproducibility of the dynamics of facial expressions in unilateral facial palsy.
Alagha, M A; Ju, X; Morley, S; Ayoub, A
2018-02-01
The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P < 0.05). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
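A minimal sketch of the alignment-and-distance computation follows, assuming two sets of corresponding 3D vertices; it performs a partial Procrustes alignment (translation and rotation only, no scaling, via the Kabsch solution) and reports the root-mean-square distance, the quantity compared between captures. It illustrates the general method, not the Di4D software.

```python
import numpy as np

def partial_procrustes_rmsd(A, B):
    """Align B to A by translation and rotation only (no scaling),
    then return the root-mean-square distance between corresponding vertices."""
    A = np.asarray(A, float); B = np.asarray(B, float)
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    # Kabsch: optimal rotation mapping the centred B onto the centred A.
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    B_aligned = B0 @ R
    return np.sqrt(np.mean(np.sum((A0 - B_aligned) ** 2, axis=1)))

# Toy example: two captures of the same (noisy) 3D facial mesh,
# the second one rotated and translated relative to the first.
rng = np.random.default_rng(1)
mesh1 = rng.normal(size=(500, 3))
rotation = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
mesh2 = mesh1 @ rotation + 5.0 + rng.normal(scale=0.01, size=(500, 3))
print(f"RMS distance after alignment: {partial_procrustes_rmsd(mesh1, mesh2):.4f}")
```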
ERIC Educational Resources Information Center
Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi
2015-01-01
Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…
Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features
ERIC Educational Resources Information Center
Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu
2011-01-01
Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…
Characterization and recognition of mixed emotional expressions in thermal face image
NASA Astrophysics Data System (ADS)
Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita
2016-05-01
Facial expressions in infrared imaging have been introduced to overcome the problem of illumination, which is an integral constituent of visible imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions from our own face database, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region expressing a mixture of two emotions generally induces less temperature change than the corresponding facial region during a basic expression.
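The ROI-to-feature-vector step can be sketched as follows; the rectangular ROI coordinates, the thermal image and the particular statistics (mean, standard deviation, minimum, maximum) are assumptions for illustration rather than the paper's exact parameters.

```python
import numpy as np

# Hypothetical thermal face image: a matrix of temperatures in degrees Celsius.
rng = np.random.default_rng(2)
thermal = 34.0 + rng.normal(scale=0.4, size=(240, 320))

# Assumed rectangular ROIs (row slice, column slice) for the three regions.
rois = {
    "periorbital":  (slice(80, 110),  slice(90, 230)),
    "supraorbital": (slice(50, 75),   slice(90, 230)),
    "mouth":        (slice(170, 200), slice(120, 200)),
}

def roi_features(img, rois):
    """Stack simple temperature statistics per ROI into one feature vector."""
    feats = []
    for name, (rs, cs) in rois.items():
        patch = img[rs, cs]
        feats.extend([patch.mean(), patch.std(), patch.min(), patch.max()])
    return np.array(feats)

vec = roi_features(thermal, rois)
print(vec.shape, vec[:4])   # 4 statistics x 3 ROIs = 12-dimensional expression vector
```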
Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno
2007-09-01
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations.
Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.
Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui
2015-01-01
This study aimed to investigate the association of facial proportion, and its relation to the golden ratio, with the evaluation of facial appearance in a Malaysian population. This was a cross-sectional study of 286 students randomly selected from the Universiti Sains Malaysia (USM) Health Campus (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) subjects were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23, respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), and no significant difference was found between races. Of the 286 subjects, 49 (17.1%) had an ideal facial shape, 156 (54.5%) a short one and 81 (28.3%) a long one. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared with MM and MI, whose mean scores were 1.80 ± 0.97 and 1.64 ± 0.74, respectively, for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83, respectively, for facial parts. In conclusion: 1) only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) facial index did not depend significantly on race; 3) significant sexual dimorphism was shown among Malaysian Chinese; 4) all three races are generally satisfied with their own facial appearance; 5) no significant association was found between the golden ratio and facial evaluation score in the Malaysian population.
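For illustration, a short sketch of the facial-index classification follows, assuming the index is facial height divided by facial width and using an arbitrary tolerance band around the golden ratio; the study's actual cut-offs for short/ideal/long may differ.

```python
# Minimal sketch: classify a facial index relative to the golden ratio.
# The tolerance band is an assumption for illustration, not the study's criterion.
GOLDEN_RATIO = 1.618
TOLERANCE = 0.05

def classify_face(face_height_cm: float, face_width_cm: float) -> str:
    index = face_height_cm / face_width_cm
    if index < GOLDEN_RATIO - TOLERANCE:
        return f"short face (index {index:.2f})"
    if index > GOLDEN_RATIO + TOLERANCE:
        return f"long face (index {index:.2f})"
    return f"ideal (near golden ratio, index {index:.2f})"

print(classify_face(18.5, 11.8))   # index ~1.57 -> classified as short by this criterion
```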
A patient with bilateral facial palsy associated with hypertension and chickenpox: learning points.
Al-Abadi, Eslam; Milford, David V; Smith, Martin
2010-11-26
Bilateral facial nerve paralysis is an uncommon presentation, and even more so in children. There are reports of different causes of bilateral facial nerve palsy. It is well established that hypertension and chickenpox cause unilateral facial paralysis, and the importance of checking the blood pressure in children with facial nerve paralysis cannot be stressed enough. The authors report a boy with bilateral facial nerve paralysis in association with hypertension who had recently recovered from chickenpox. The authors review aspects of bilateral facial nerve paralysis, as well as hypertension and chickenpox as causes of facial nerve paralysis.
Facial nerve paralysis secondary to occult malignant neoplasms.
Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J
2004-04-01
This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative explorations. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, and prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.
Cavoy, R
2013-09-01
Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can give a facial palsy that is easily differentiated from a peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful, and a structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.
Llamazares-Martín, Clara; Scopa, Chiara; Guillén-Salazar, Federico; Palagi, Elisabetta
2017-07-01
Fine-tuning of communication is well documented in mammalian social play, which relies on a large variety of specific and non-specific signals. Facial expressions are one of the most frequent patterns in play communication. The reciprocity of facial signals expressed by the players provides information on their reciprocal attentional state and on the correct perception/decoding of the signal itself. Here, for the first time, we explored the Relaxed Open Mouth (ROM), a playful facial expression widespread among mammals, in the South American sea lion (Otaria flavescens). In this species, like many others, ROM appears to be used as a playful signal, as distinct from merely being a biting action. ROM was often reciprocated by players. Even though ROM did not vary in frequency of emission as a function of the number of players involved, it was reciprocated more often during dyadic encounters, in which the players had the highest probability of engaging in a face-to-face interaction. Finally, we found that it was the reciprocation of ROMs, more than the frequency of their performance, that was effective in prolonging playful bouts. In conclusion, ROM is widespread in many social mammals, and O. flavescens is not an exception. At least in those species for which quantitative data are available, ROM seems to be characterized by similar design features, clearly indicating that the signal underwent similar selective pressures. Copyright © 2017 Elsevier B.V. All rights reserved.
Auerbach, Sarah
2017-01-01
Trait cheerfulness predicts individual differences in experiences and behavioral responses in various humor experiments and settings. The present study is the first to investigate whether trait cheerfulness also influences the impact of a hospital clown intervention on the emotional state of patients. Forty-two adults received a clown visit in a rehabilitation center and rated their emotional state and trait cheerfulness afterward. Facial expressions of patients during the clown visit were coded with the Facial Action Coding System. Looking at the total sample, the hospital clown intervention elicited more frequent facial expressions of genuine enjoyment (Duchenne smiles) than other smiles (Non-Duchenne smiles), and more Duchenne smiles went along with more perceived funniness, a higher level of global positive feelings and transcendence. This supports the notion that overall, hospital clown interventions are beneficial for patients. However, when considering individual differences in the receptiveness to humor, results confirmed that high trait cheerful patients showed more Duchenne smiles than low trait cheerful patients (with no difference in Non-Duchenne smiles), and reported a higher level of positive emotions than low trait cheerful individuals. In summary, although hospital clown interventions on average successfully raise the patients’ level of positive emotions, not all patients in hospitals are equally susceptible to respond to humor with amusement, and thus do not equally benefit from a hospital clown intervention. Implications for research and practitioners are discussed. PMID:29180976
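The Duchenne versus non-Duchenne distinction used above is commonly operationalized in FACS terms as the co-occurrence of AU6 (cheek raiser) with AU12 (lip corner puller); the sketch below applies that rule to a toy list of coded frames and is not the study's coding procedure.

```python
# A minimal sketch of distinguishing Duchenne from non-Duchenne smiles given
# FACS action-unit codes per video frame. The AU6 + AU12 rule is a common
# operationalization; the study's actual coding criteria may differ.
frames = [
    {"AU6", "AU12"},          # smile with cheek raiser -> Duchenne
    {"AU12"},                 # lip corners only        -> non-Duchenne
    {"AU4"},                  # brow lowerer            -> no smile
    {"AU6", "AU12", "AU25"},  # Duchenne smile with lips parted
]

def classify_smile(aus: set) -> str:
    if "AU12" not in aus:
        return "no smile"
    return "Duchenne" if "AU6" in aus else "non-Duchenne"

counts = {}
for f in frames:
    label = classify_smile(f)
    counts[label] = counts.get(label, 0) + 1
print(counts)   # {'Duchenne': 2, 'non-Duchenne': 1, 'no smile': 1}
```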
The olfactory fascia: an evo-devo concept of the fibrocartilaginous nose.
Jankowski, Roger; Rumeau, Cécile; de Saint Hilaire, Théophile; Tonnelet, Romain; Nguyen, Duc Trung; Gallet, Patrice; Perez, Manuela
2016-12-01
Evo-devo is the science that studies the link between the evolution of species and embryological development. This concept helps to understand the complex anatomy of the human nose. The evo-devo theory suggests the persistence in the adult of an anatomical entity, the olfactory fascia, that unites the cartilages of the nose to the olfactory mucosa. We dissected two fresh specimens. After resecting the superficial tissues of the nose, dissection was focused on the disarticulation of the fibrocartilaginous noses from the facial and skull base skeleton. Dissection showed two fibrocartilaginous sacs that were invaginated side by side in the midface and attached to the anterior skull base. These membranous sacs were separated in the midline by the perpendicular plate of the ethmoid. Their walls contained the alar cartilages and the lateral expansions of the septolateral cartilage, which we had to separate from the septal cartilage. The olfactory mucosa was located inside their cranial ends. The olfactory fascia is a continuous membrane uniting the nasal cartilages to the olfactory mucosa. Its origin can be found in the invagination and differentiation processes of the olfactory placodes. The fibrous portions of the olfactory fascia may be described as ligaments that unite the different components of the olfactory fascia to one another and the fibrocartilaginous nose to the facial and skull base skeleton. The basicranial ligaments, fixing the fibrocartilaginous nose to the skull base, represent key elements in the concept of septorhinoplasty by disarticulation.
26 CFR 301.7403-1 - Action to enforce lien or to subject property to payment of tax.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Action to enforce lien or to subject property... Proceedings Civil Actions by the United States § 301.7403-1 Action to enforce lien or to subject property to... to be filed in a district court of the United States to enforce the lien of the United States under...
Social Interaction Behavior in ADHD in Adults in a Virtual Trust Game.
Lis, Stefanie; Baer, Nina; Franzen, Nele; Hagenhoff, Meike; Gerlach, Maika; Koppe, Georgia; Sammer, Gebhard; Gallhofer, Bernd; Kirsch, Peter
2016-04-01
Social cognitive functions in adults with ADHD were investigated in a virtual social exchange game. The sample consisted of 40 participants (20 adult ADHD participants, 20 healthy controls). Participants played a multiround trust game with virtual trustees who differed in regard to fairness and presence of emotional facial cues. Investments were higher in ADHD participants than in healthy participants except for partners who played fair with constant neutral expressions. ADHD patients did not adapt their behavior to the fairness of the trustee. In the presence of emotional facial cues, ADHD and healthy participants transferred more monetary units to happy rather than angry-looking trustees. Differences in investment behavior were not linked to deficits in emotion-recognition abilities or cognitive dysfunctions. Alterations in interaction behavior and in the formation of a general attitude toward social partners could be shown in adults with ADHD. © The Author(s) 2013.
Islam, Shamim
2018-05-01
Cutaneous leishmaniasis (CL), a common condition in many parts of the world, is being increasingly encountered in non-endemic countries secondary to immigration. The clinical manifestations and course can vary substantially, with appropriate management ranging from observation for self-healing lesions to urgent treatment to prevent damaging anatomical and cosmetic sequelae. While there are now several effective medications, optimal therapy is not well defined, and decision-making can be challenged by the location of lesions and various drug issues, including availability, mode of delivery and adverse effects. A 7-year-old Afghani boy who presented shortly after arriving in the United States with a rapidly progressing crusting and ulcerative facial rash caused by Leishmania tropica is described. The various drugs currently available for CL and experience of using liposomal amphotericin B specifically are reviewed.
[Surgical correction of cleft palate].
Kimura, F T; Pavia Noble, A; Soriano Padilla, F; Soto Miranda, A; Medellín Rodríguez, A
1990-04-01
This study presents a statistical review of corrective surgery for cleft palate, based on cases treated at the maxillofacial surgery units of the Pediatrics Hospital of the Centro Médico Nacional and at Centro Médico La Raza of the National Institute of Social Security of Mexico, over a five-year period. Interdisciplinary management as performed at the Cleft Palate Clinic is amply described: an integrated approach involving specialists in maxillofacial surgery, maxillary orthopedics, genetics, social work and mental hygiene, aimed at reestablishing the stomatological and psychological functions of children afflicted by cleft palate. The frequency and classification of the various techniques practiced in that service are described, as well as surgical statistics for 188 patients, comprising a total of 256 palate surgeries performed from March 1984 to March 1989 using three different techniques; a combination of these techniques in a single surgical stage is proposed in order to avoid complementary surgery.
Rozin, P; Lowery, L; Imada, S; Haidt, J
1999-04-01
It is proposed that 3 emotions--contempt, anger, and disgust--are typically elicited, across cultures, by violations of 3 moral codes proposed by R. A. Shweder and his colleagues (R. A. Shweder, N. C. Much, M. Mahapatra, & L. Park, 1997). The proposed alignment links anger to autonomy (individual rights violations), contempt to community (violation of communal codes including hierarchy), and disgust to divinity (violations of purity-sanctity). This is the CAD triad hypothesis. Students in the United States and Japan were presented with descriptions of situations that involve 1 of the types of moral violations and asked to assign either an appropriate facial expression (from a set of 6) or an appropriate word (contempt, anger, disgust, or their translations). Results generally supported the CAD triad hypothesis. Results were further confirmed by analysis of facial expressions actually made by Americans to the descriptions of these situations.
The microbiome of New World vultures.
Roggenbuck, Michael; Bærholm Schnell, Ida; Blom, Nikolaj; Bælum, Jacob; Bertelsen, Mads Frost; Sicheritz-Pontén, Thomas; Pontén, Thomas Sicheritz; Sørensen, Søren Johannes; Gilbert, M Thomas P; Graves, Gary R; Hansen, Lars H
2014-11-25
Vultures are scavengers that fill a key ecosystem niche, in which they have evolved a remarkable tolerance to bacterial toxins in decaying meat. Here we report the first deep metagenomic analysis of the vulture microbiome. Through face and gut comparisons of 50 vultures representing two species, we demonstrate a remarkably conserved low diversity of gut microbial flora. The gut samples contained an average of 76 operational taxonomic units (OTUs) per specimen, compared with 528 OTUs on the facial skin. Clostridia and Fusobacteria, widely pathogenic to other vertebrates, dominate the vulture's gut microbiota. We reveal a likely faecal-oral-gut route for their origin. DNA of prey species detectable on facial swabs was completely degraded in the gut samples from most vultures, suggesting that the gastrointestinal tracts of vultures are extremely selective. Our findings show a strong adaption of vultures and their bacteria to their food source, exemplifying a specialized host-microbial alliance.
Facial burns from exploding microwaved foods: Case series and review.
Bagirathan, Shenbana; Rao, Krishna; Al-Benna, Sammy; O'Boyle, Ciaran P
2016-03-01
Microwave ovens allow for quick and simple cooking. However, the importance of adequate food preparation, prior to microwave cooking, and the consequences of inadequate preparation are not well-known. The authors conducted a retrospective outcome analysis of all patients who sustained facial burns from microwaved foods and were treated at a UK regional burns unit over a six-year period. Patients were identified from clinical records. Eight patients presented following inadequate preparation of either tinned potatoes (n=4) or eggs (n=4). All patients sustained <2% total body surface area facial burns. Mean age was 41 years (range 21-68 years). Six cases (75%) had associated ocular injury. One received amniotic membrane grafts; this individual's vision remains poor twelve months after injury. Rapid dielectric heating of water within foods may produce high steam and vapour pressure gradients and cause explosive decompression [1,5,11]. Consumers may fail to recognise differential heating and simply cook foods for longer if they remain cool on the outer surface. Education on safe use and risks of microwave-cooked foods may help prevent these potentially serious injuries. Microwave ovens have become ubiquitous. The authors recognise the need for improved public awareness of safe microwave cooking. Burns resulting from microwave-cooked foods may have life-changing consequences. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
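A minimal sketch of the exemplar idea (not the paper's learned inhomogeneous-bias model) is shown below: each exemplar stores the offset between its initial estimate and its ground-truth pose, and a new estimate is corrected by averaging the offsets of its nearest exemplars; all data here are synthetic.

```python
import numpy as np

# Toy exemplar set: pairs of (initially estimated pose, ground-truth pose),
# each pose being J joints x 3 coordinates. Data are synthetic.
rng = np.random.default_rng(3)
J = 15
true_poses = rng.normal(size=(50, J, 3))
bias = np.array([0.05, -0.02, 0.1])                    # systematic estimation bias
estimated = true_poses + bias + rng.normal(scale=0.01, size=true_poses.shape)
corrections = true_poses - estimated                   # stored offset per exemplar

def correct_pose(query_est, exemplars_est, exemplar_offsets, k=3):
    """Correct a pose estimate by averaging offsets of its k nearest exemplars."""
    d = np.linalg.norm((exemplars_est - query_est).reshape(len(exemplars_est), -1), axis=1)
    nearest = np.argsort(d)[:k]
    return query_est + exemplar_offsets[nearest].mean(axis=0)

query = true_poses[0] + bias                           # a new, biased estimate
print(np.linalg.norm(query - true_poses[0]))                                        # error before
print(np.linalg.norm(correct_pose(query, estimated, corrections) - true_poses[0]))  # error after
```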
Nellis, Jason C.; Ishii, Masaru; Byrne, Patrick J.; Boahene, Kofi D. O.; Dey, Jacob K.; Ishii, Lisa E.
2017-01-01
IMPORTANCE: Though anecdotally linked, few studies have investigated the impact of facial paralysis on depression and quality of life (QOL). OBJECTIVE: To measure the association between depression, QOL, and facial paralysis in patients seeking treatment at a facial plastic surgery clinic. DESIGN, SETTING, PARTICIPANTS: Data were prospectively collected for patients with all-cause facial paralysis and control patients initially presenting to a facial plastic surgery clinic from 2013 to 2015. The control group included a heterogeneous patient population presenting to the facial plastic surgery clinic for evaluation. Patients who had prior facial reanimation surgery or missing demographic and psychometric data were excluded from analysis. MAIN OUTCOMES AND MEASURES: Demographics, facial paralysis etiology, facial paralysis severity (graded on the House-Brackmann scale), Beck depression inventory, and QOL scores in both groups were examined. Potential confounders, including self-reported attractiveness and mood, were collected and analyzed. Self-reported scores were measured using a 0 to 100 visual analog scale. RESULTS: A total of 263 patients (mean age, 48.8 years; 66.9% female) were analyzed. There were 175 control patients and 88 patients with facial paralysis. Sex distributions were not significantly different between the facial paralysis and control groups. Patients with facial paralysis had significantly higher depression, lower self-reported attractiveness, lower mood, and lower QOL scores. Overall, 37 patients with facial paralysis (42.1%) screened positive for depression, with the greatest likelihood in patients with House-Brackmann grade 3 or greater (odds ratio, 10.8; 95% CI, 5.13-22.75), compared with 13 control patients (8.1%) (P < .001). In multivariate regression, facial paralysis and female sex were significantly associated with higher depression scores (constant, 2.08 [95% CI, 0.77-3.39]; facial paralysis effect, 5.98 [95% CI, 4.38-7.58]; female effect, 1.95 [95% CI, 0.65-3.25]). Facial paralysis was associated with lower QOL scores (constant, 81.62 [95% CI, 78.98-84.25]; facial paralysis effect, -16.06 [95% CI, -20.50 to -11.62]). CONCLUSIONS AND RELEVANCE: For treatment-seeking patients, facial paralysis was significantly associated with increased depression and worse QOL scores. In addition, female sex was significantly associated with increased depression scores. Moreover, patients with a greater severity of facial paralysis were more likely to screen positive for depression. Clinicians initially evaluating patients should consider the psychological impact of facial paralysis to optimize care. LEVEL OF EVIDENCE: 2. PMID:27930763
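For illustration, the reported point estimates can be plugged in directly to compare expected scores across patient profiles; the sketch below codes facial paralysis and female sex as 0/1 indicators (an assumption about the model's coding) and ignores the confidence intervals.

```python
def predicted_depression(facial_paralysis: bool, female: bool) -> float:
    """Point estimate from the reported model: 2.08 + 5.98*paralysis + 1.95*female."""
    return 2.08 + 5.98 * facial_paralysis + 1.95 * female

def predicted_qol(facial_paralysis: bool) -> float:
    """Point estimate from the reported model: 81.62 - 16.06*paralysis."""
    return 81.62 - 16.06 * facial_paralysis

# Expected scores for the four patient profiles implied by the two binary predictors.
for paralysis in (False, True):
    for female in (False, True):
        print(f"paralysis={paralysis}, female={female}: "
              f"depression={predicted_depression(paralysis, female):.2f}, "
              f"QOL={predicted_qol(paralysis):.2f}")
```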
The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.
Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S
2018-04-01
This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in the use of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and the delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons.
Valentova, Jaroslava Varella; Varella, Marco Antonio Corrêa; Havlíček, Jan; Kleisner, Karel
2017-02-01
Various species use multiple sensory modalities in their communication. In humans, female facial appearance and vocal display are correlated, and it has been suggested that they serve as redundant markers indicating the bearer's reproductive potential and/or residual fertility. In men, evidence for redundancy of facial and vocal attractiveness is ambiguous. We tested the redundancy/multiple signals hypothesis by correlating perceived facial and vocal attractiveness in men and women from two different populations, Brazil and the Czech Republic. We also investigated whether facial and vocal attractiveness are linked to facial morphology. Standardized facial pictures and vocal samples of 86 women (47 from Brazil) and 81 men (41 from Brazil), aged 18-35, were rated for attractiveness by opposite-sex raters. Facial and vocal attractiveness were found to correlate positively in women but not in men. We further applied geometric morphometrics and regressed facial shape coordinates on facial and vocal attractiveness ratings. In women, facial shape was linked to facial attractiveness, but there was no association between facial shape and vocal attractiveness. In men, none of these associations was significant. Having shown that women with more attractive faces also possess more attractive voices, we thus only partly support the redundant signal hypothesis. Copyright © 2016 Elsevier B.V. All rights reserved.
Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.
2014-01-01
Background: Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods: We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results: The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions: This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397
Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori
2013-11-01
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.
The effects of facial adiposity on attractiveness and perceived leadership ability.
Re, Daniel E; Perrett, David I
2014-01-01
Facial attractiveness has a positive influence on electoral success both in experimental paradigms and in the real world. One parameter that influences facial attractiveness and social judgements is facial adiposity (a facial correlate to body mass index, BMI). Overweight people have high facial adiposity and are perceived to be less attractive and lower in leadership ability. Here, we used an interactive design in order to assess whether the most attractive level of facial adiposity is also perceived as most leader-like. We found that participants reduced facial adiposity more to maximize attractiveness than to maximize perceived leadership ability. These results indicate that facial appearance impacts leadership judgements beyond the effects of attractiveness. We suggest that the disparity between optimal facial adiposity in attractiveness and leadership judgements stems from social trends that have produced thin ideals for attractiveness, while leadership judgements are associated with perception of physical dominance.
El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis
2018-05-11
Thanks to the current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates variations in facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help understand the functioning and dysfunction of autobiographical memory.
Aberrant patterns of visual facial information usage in schizophrenia.
Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M
2013-05-01
Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool across various spheres of activity. The face recognition pipeline is composed of three stages: face detection, feature extraction, and recognition. In this paper, we put forth a new application of face detection and recognition in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face against the templates available in the database, and updates the attendance of the respective students.
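The three-stage pipeline described above (detection, feature extraction, recognition against stored templates) can be sketched as follows. This is not the authors' system; it is a minimal illustration assuming the open-source face_recognition library, and the enrolment image paths, student names, and log_attendance function are hypothetical names introduced here.

import face_recognition

# Hypothetical enrolment database: student name -> 128-dimensional face encoding.
known_students = {
    "alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice_enrolment.jpg"))[0],
    "bob": face_recognition.face_encodings(
        face_recognition.load_image_file("bob_enrolment.jpg"))[0],
}

def log_attendance(classroom_image_path, tolerance=0.6):
    """Detect faces in a classroom photo and return the set of matched student names."""
    image = face_recognition.load_image_file(classroom_image_path)   # scan the classroom image
    locations = face_recognition.face_locations(image)               # stage 1: face detection
    encodings = face_recognition.face_encodings(image, locations)    # stage 2: feature extraction
    present = set()
    names = list(known_students)
    templates = [known_students[n] for n in names]
    for encoding in encodings:                                       # stage 3: recognition
        distances = face_recognition.face_distance(templates, encoding)
        best = int(distances.argmin())
        if distances[best] <= tolerance:
            present.add(names[best])
    return present

print(log_attendance("classroom_scan.jpg"))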
Towards the neurobiology of emotional body language.
de Gelder, Beatrice
2006-03-01
People's faces show fear in many different circumstances. However, when people are terrified, as well as showing emotion, they run for cover. When we see a bodily expression of emotion, we immediately know what specific action is associated with a particular emotion, leaving little need for interpretation of the signal, as is the case for facial expressions. Research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are automatically perceived and understood, and their role in emotional communication and decision-making.
Belden, Sarah; Miller, Richard A.
2015-01-01
There is a growing demand for noninvasive anti-aging products for which the periorbital region serves as a critical aspect of facial rejuvenation. This article reviews a multitude of cosmeceutical ingredients that have good scientific data, specifically for the periorbital region. Topical treatment options have exponentially grown from extensively studied retinoids, to recently developed technology, such as growth factors and peptides. With a focus on the periorbital anatomy, the authors review the mechanisms of action of topical cosmeceutical ingredients, effectiveness of ingredient penetration through the stratum corneum, and validity of clinical trials. PMID:26430490
Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song
2018-06-06
Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We therefore modified the classical method to a hypoglossal-facial nerve "side"-to-side neurorrhaphy using an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12- to 30-month follow-up period, no further detectable deficits were observed, and an improvement in facial nerve function was evident over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous innervation, with facial function improving from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that the hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and to allow axonal regrowth in a rich supportive environment.
Patidar, Monika V; Deshmukh, Ashish Ramchandra; Khedkar, Maruti Yadav
2016-01-01
Background: Acne vulgaris is the most common disease of the skin affecting adolescents and young adults, causing psychological distress. The combination of antibiotic resistance, adverse effects of topical and systemic anti-acne medications, and the desire for high-tech approaches have all led to new enthusiasm for light-based acne treatment. Intense pulsed light (IPL) therapy has three modes of action in acne vulgaris: photochemical, photothermal and photoimmunological. Aims: (1) To study the efficacy of IPL therapy in facial acne vulgaris; (2) to compare two fluences - one normal and the other subnormal - on the right and left sides of the face, respectively. Methods: A total of 45 patients aged 16 to 28 years with inflammatory facial acne vulgaris were included in this prospective study. Baseline data for each patient were recorded. All patients were given 4 sittings of IPL at 2-week intervals and were followed every 2 weeks for 2 months. The fluence used was 35 J/cm2 on the right side and 20 J/cm2 on the left side. Percentage reduction in lesion count was calculated at each sitting and follow-up and graded as mild (0-25%), moderate (26-50%), good (51-75%) and excellent (76-100%). Side effects were noted. The results were analysed using the Mann-Whitney test. Results: On the right side, excellent results were achieved in 10 (22%), good in 22 (49%) and moderate in 13 (29%) patients. On the left side, excellent results were achieved in 7 (15%), good in 19 (42%) and moderate in 16 (43%) patients. There was no statistically significant difference in the efficacy of the two fluences in the treatment of facial acne vulgaris. Conclusions: IPL is an effective and safe option for inflammatory acne vulgaris with minimal, reversible side effects. Subnormal fluence is as effective as normal fluence in Indian skin. PMID:27688446
Akwagyiram, Ivy; Butler, Andrew; Maclure, Robert; Colgan, Patrick; Yan, Nicole; Bosma, Mary Lynn
2016-08-25
Gingivitis can develop as a reaction to dental plaque. It can be limited by curtailing plaque build-up through actions including tooth brushing and the use of medicinal mouthwashes, such as those containing chlorhexidine digluconate (CHX), that can reach parts of the mouth that may be missed when brushing. This study aimed to compare dental stain control of twice-daily brushing with a sodium fluoride (NaF) dentifrice containing 67 % sodium bicarbonate (NaHCO3) or a commercially available NaF silica dentifrice without NaHCO3, while using a mouthwash containing 0.2 % CHX. This was a 6-week, randomised, two-site, examiner-blind, parallel-group study in healthy subjects with at least 'mild' stain levels on the facial surfaces of ≥4 teeth and ≥15 bleeding sites. Assessment was via the modified Lobene Stain Index (MLSI), the score being the mean of stain intensity multiplied by area (MLSI [IxA]). One hundred and fifty of 160 randomised subjects completed the study. There were no significant differences in Overall (facial and lingual) MLSI (IxA) scores between dentifrices. The difference in Facial MLSI (IxA) was statistically significant at 6 weeks, favouring the 67 % NaHCO3 dentifrice (p = 0.0404). Post-hoc analysis, conducted due to a significant site interaction, found significant differences for all MLSI scores in favour of the 67 % NaHCO3 dentifrice at Site 1 (both weeks) but not Site 2. No overall significant differences were found between the 67 % and 0 % NaHCO3 dentifrices in controlling CHX stain; a significant difference on facial surfaces suggests an advantage of the former on more accessible surfaces. This study was registered at ClinicalTrials.gov (NCT01962493) on 10 October 2013 and was funded by GSK Consumer Healthcare.
Cheung, Pui Kwan; Fok, Lincoln
2017-10-01
Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density number, population data, facial scrub usage rate, sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
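The two emission terms described above (direct emissions with untreated sewage and WWTP emissions from incomplete removal) follow a simple multiplicative structure, sketched below. Only the average bead density comes from the abstract; every other parameter value is an illustrative placeholder, not a figure from the study.

# All values below except the average bead density (20,860 particles/g, from the
# abstract) are illustrative placeholders, not the study's actual inputs.

AVG_BEADS_PER_GRAM = 20860            # measured average across the nine facial scrubs
population = 1.4e9                     # illustrative population size
scrub_user_fraction = 0.10             # illustrative facial scrub usage rate
grams_per_user_per_year = 100.0        # illustrative per-user consumption
sewage_treatment_rate = 0.80           # illustrative fraction of sewage reaching a WWTP

treated_sewage_m3_per_year = 5.0e10    # illustrative annual treated sewage volume
beads_per_m3_effluent = 1000.0         # illustrative residual bead density after treatment

beads_used = population * scrub_user_fraction * grams_per_user_per_year * AVG_BEADS_PER_GRAM

# Direct emissions: beads discharged with sewage that receives no treatment.
direct_emissions = beads_used * (1.0 - sewage_treatment_rate)

# WWTP emissions: treated sewage volume multiplied by the residual bead density.
wwtp_emissions = treated_sewage_m3_per_year * beads_per_m3_effluent

total = direct_emissions + wwtp_emissions
print(f"direct: {direct_emissions:.2e}  WWTP: {wwtp_emissions:.2e}  total: {total:.2e} beads/year")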
Interference among the Processing of Facial Emotion, Face Race, and Face Gender
Li, Yongna; Tse, Chi-Shing
2016-01-01
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE /NV
This Corrective Action Decision Document has been prepared for Corrective Action Unit 340, the NTS Pesticide Release Sites, in accordance with the Federal Facility Agreement and Consent Order of 1996 (FFACO, 1996). Corrective Action Unit 340 is located at the Nevada Test Site, Nevada, and is comprised of the following Corrective Action Sites: 23-21-01, Area 23 Quonset Hut 800 Pesticide Release Ditch; 23-18-03, Area 23 Skid Huts Pesticide Storage; and 15-18-02, Area 15 Quonset Hut 15-11 Pesticide Storage. The purpose of this Corrective Action Decision Document is to identify and provide a rationale for the selection of a recommended corrective action alternative for each Corrective Action Site. The scope of this Corrective Action Decision Document consists of the following tasks: Develop corrective action objectives; Identify corrective action alternative screening criteria; Develop corrective action alternatives; Perform detailed and comparative evaluations of the corrective action alternatives in relation to the corrective action objectives and screening criteria; and Recommend and justify a preferred corrective action alternative for each Corrective Action Site.
Kim, B Y; Choi, J W; Park, K C; Youn, S W
2013-02-01
Enlarged facial pores are an esthetic problem and have become a matter of cosmetic concern. Several factors are thought to be related to the enlargement of facial pores, although scientific evaluations have not yet been performed. To assess the correlation between facial pores and possible related factors such as age, gender, sebum secretion, skin elasticity, and the presence of acne, using objective bioengineering instruments. Sixty volunteers, 30 males and 30 females, participated in this study. Various parameters of facial pores were assessed using the Robo Skin Analyzer. The facial sebum secretion and skin elasticity were measured using the Sebumeter and the Cutometer, respectively. These data were compared and correlated to examine the possible relationship between facial pores and age, sebum secretion and skin elasticity, according to gender and the presence of acne. Male gender and the presence of acne were correlated with a higher number of facial pores. Sebum secretion levels showed a positive correlation with facial pores. The R7 parameter of skin elasticity was negatively correlated with facial pores, suggesting increased facial pores with decreased skin elasticity. However, age and the severity of acne did not show a definite relationship with facial pores. Male gender, increased sebum and decreased skin elasticity were most strongly correlated with facial pore development. Further studies on populations with various demographic profiles and more severe acne may help elucidate the potential effect of aging and acne severity on facial pores. © 2011 John Wiley & Sons A/S.
Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M
2017-09-01
Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geo-morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
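A regression of the kind described above, relating attractiveness ratings to objective 3D measurements, could look like the following sketch using statsmodels. The predictor names and the synthetic data are hypothetical stand-ins; the study's geo-morphometric variables are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 313  # number of imaged adults reported above
faces = pd.DataFrame({
    "symmetric_shape_pc1": rng.normal(size=n),     # hypothetical shape component
    "centroid_size": rng.normal(size=n),
    "fluctuating_asymmetry": rng.normal(size=n),
})
# Synthetic ratings generated purely for illustration.
faces["attractiveness"] = (
    -0.4 * faces["fluctuating_asymmetry"]
    - 0.2 * faces["centroid_size"]
    + rng.normal(scale=0.5, size=n)
)

X = sm.add_constant(faces[["symmetric_shape_pc1", "centroid_size", "fluctuating_asymmetry"]])
fit = sm.OLS(faces["attractiveness"], X).fit()
print(fit.summary())  # per-measurement coefficients and p-values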
Broom, Margaret; Gardner, Anne; Kecskes, Zsuzsoka; Kildea, Sue
2017-07-01
To facilitate staff transition from an open-plan to a two-cot neonatal intensive care unit design. In 2012, an Australian regional neonatal intensive care unit transitioned from an open-plan to a two-cot neonatal intensive care unit design. Research has reported single- and small-room neonatal intensive care unit design may negatively impact on the distances nurses walk, reducing the time they spend providing direct neonatal care. Studies have also reported nurses feel isolated and need additional support and education in such neonatal intensive care units. Staff highlighted their concerns regarding the impact of the new design on workflow and clinical practice. A participatory action research approach. A participatory action group titled the Change and Networking Group collaborated with staff over a four-year period (2009-2013) to facilitate the transition. The Change and Networking Group used a collaborative, cyclical process of planning, gathering data, taking action and reviewing the results to plan the next action. Data sources included meeting and workshop minutes, newsletters, feedback boards, subgroup reports and a staff satisfaction survey. The study findings include a description of (1) how the participatory action research cycles were used by the Change and Networking Group: providing examples of projects and strategies undertaken; and (2) evaluations of participatory action research methodology and Group by neonatal intensive care unit staff and Change and Networking members. This study has described the benefits of using participatory action research to facilitate staff transition from an open-plan to a two-cot neonatal intensive care unit design. Participatory action research methodology enabled the inclusion of staff to find solutions to design and clinical practice questions. Future research is required to assess the long-term effect of neonatal intensive care unit design on staff workload, maintaining and supporting a skilled workforce as well as the impact of a new neonatal intensive care unit design on the neonatal intensive care unit culture. A supportive work environment for staff is critical in providing high-quality health care. © 2016 John Wiley & Sons Ltd.
Facial nerve palsy due to birth trauma
Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-04-01
This Closure Report summarizes the corrective actions which were completed at the Corrective Action Sites within Corrective Action Unit 211, Area 15 Farm Waste Sites, at the Nevada Test Site. Current site descriptions, observations and identification of wastes removed are included on FFACO Corrective Action Site housekeeping closure verification forms.
Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R
2014-01-01
Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed subjectively, can be easily and reproducibly measured using three-dimensional photogrammetry. The RMSD for facial asymmetry of healthy volunteers clusters at approximately 0.80 ± 0.24 mm. Patients with facial asymmetry due to a pathologic process can be differentiated from normative facial asymmetry based on their RMSDs.
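The symmetry metric described above, the RMSD between the native and reflected faces, can be sketched in a few lines of numpy. The sketch assumes the face is already aligned so that the plane of maximum symmetry is x = 0 and uses brute-force closest-point matching; the paper's plane-fitting procedure is not reproduced.

import numpy as np

def facial_rmsd(points: np.ndarray) -> float:
    """points: (N, 3) array of facial surface coordinates in mm, pre-aligned so
    that the plane of maximum symmetry is x = 0."""
    reflected = points * np.array([-1.0, 1.0, 1.0])      # mirror the face across x = 0
    # Match each original point to its nearest reflected point (brute-force closest point).
    dists = np.linalg.norm(points[:, None, :] - reflected[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

# Toy check: a perfectly mirror-symmetric three-point "face" gives RMSD = 0.
pts = np.array([[10.0, 0.0, 5.0], [-10.0, 0.0, 5.0], [0.0, -20.0, 0.0]])
print(facial_rmsd(pts))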
Beurskens, Carien H G; Heymans, Peter G
2006-01-01
What is the effect of mime therapy on facial symmetry and severity of paresis in people with facial nerve paresis? Randomised controlled trial. 50 people recruited from the Outpatient department of two metropolitan hospitals with facial nerve paresis for more than nine months. The experimental group received three months of mime therapy consisting of massage, relaxation, inhibition of synkinesis, and co-ordination and emotional expression exercises. The control group was placed on a waiting list. Assessments were made on admission to the trial and three months later by a measurer blinded to group allocation. Facial symmetry was measured using the Sunnybrook Facial Grading System. Severity of paresis was measured using the House-Brackmann Facial Grading System. After three months of mime therapy, the experimental group had improved their facial symmetry by 20.4 points (95% CI 10.4 to 30.4) on the Sunnybrook Facial Grading System compared with the control group. In addition, the experimental group had reduced the severity of their paresis by 0.6 grade (95% CI 0.1 to 1.1) on the House-Brackmann Facial Grading System compared with the control group. These effects were independent of age, sex, and duration of paresis. Mime therapy improves facial symmetry and reduces the severity of paresis in people with facial nerve paresis.
Facial transplantation for massive traumatic injuries.
Alam, Daniel S; Chi, John J
2013-10-01
This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce “StorySense”, an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children’s motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage “low-motor” interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child’s gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human-computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism. PMID:25954336
Human and animal sounds influence recognition of body language.
Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice
2008-11-25
In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole-body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole-body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or of animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information holds for whole-body video images with the facial expression blanked, and includes human as well as animal sounds.
Facial dog bite injuries in children: treatment and outcome assessment.
Eppley, Barry L; Schleich, Arno Rene
2013-03-01
Dog bite injuries to a child's face are not an infrequent occurrence. They may require primary and revisional surgery. All result in permanent facial scars. This report describes the treatment and outcomes of dog bites of the face, scalp, and neck based on a case series of 107 children over a 10-year period. The average children's age was 5.9 years. In cases where the dog was identified (95%), it was known to the victim and their family. The events leading to the dog bite were categorized as provoked (77%) in the majority of the cases. The majority of wounds could be closed primarily without a significant risk of wound infection. Complex reconstructions were required in more severe cases. The majority of families (77%) opted for scar revision between 9 and 18 months after initial treatment to improve the aesthetic outcome. Lawsuit actions resulted in 39 of the cases, making good documentation an essential part of treatment. Dog bite injuries to the face in children frequently require multiple scar revisions to obtain the best possible aesthetic outcome, and the family should be so counseled at the onset of treatment.
Picasso and the art of distortion and dislocation: the artist as researcher and experimentalist.
Cohen, M M
1991-01-01
This paper is divided into four parts. In the first part, some general ideas about Picasso are set forth including: his association with multiple artistic innovations; his use of many different media; his notions of beauty and the relationship between art and nature; his ideas about the placement of body parts, symmetry, and color; and his relish in producing paintings with shock value. Emphasis is placed on the relationship between art and science and on Picasso's role as a researcher and experimentalist. In the second part, works of art during the cubist and postcubist years are discussed with emphasis on the development of simultaneity--the coexistence of different views of an object in the same picture. Topics included are: facial grafting; facial accommodation; multifacialism; profile insertion; snout formation, elevation, and rotation; concurrent faces; and more comprehensive simultaneity. In the third part, other influences on Picasso are presented including: the effects of action, motion, and activity; Surrealism and sexual symbolism; and Picasso's artistic treatment of women. In the fourth part, a comparison is made between Picasso's experiments and nature's experiments.
Sayette, Michael A.; Creswell, Kasey G.; Dimoff, John D.; Fairbairn, Catharine E.; Cohn, Jeffrey F.; Heckman, Bryan W.; Kirchner, Thomas R.; Levine, John M.; Moreland, Richard L.
2017-01-01
We integrated research on emotion and on small groups to address a fundamental and enduring question facing alcohol researchers: What are the specific mechanisms that underlie the reinforcing effects of drinking? In one of the largest alcohol-administration studies yet conducted, we employed a novel group-formation paradigm to evaluate the socioemotional effects of alcohol. Seven hundred twenty social drinkers (360 male, 360 female) were assembled into groups of 3 unacquainted persons each and given a moderate dose of an alcoholic, placebo, or control beverage, which they consumed over 36 min. These groups’ social interactions were video recorded, and the duration and sequence of interaction partners’ facial and speech behaviors were systematically coded (e.g., using the Facial Action Coding System). Alcohol consumption enhanced individual- and group-level behaviors associated with positive affect, reduced individual-level behaviors associated with negative affect, and elevated self-reported bonding. Our results indicate that alcohol facilitates bonding during group formation. Assessing nonverbal responses in social contexts offers new directions for evaluating the effects of alcohol. PMID:22760882
Facial expressions and pair bonds in hylobatids.
Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H
2018-06-06
Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony, or rates of behavioral synchrony. We found that FES was the strongest measure of hylobatid expressiveness and was significantly positively correlated with higher sociality index scores; however, FES showed no significant correlation with behavioral synchrony. No noticeable differences between pairs were found regarding rates of behavioral or territorial synchrony. Facial repertoire sizes and FES were not significantly correlated with rates of behavioral synchrony or territorial synchrony. Our study confirms an important role of facial expressions in maintaining pair bonds and coordinating activities in hylobatids. Data support the hypothesis that facial expressions and sociality have been linked in hylobatid and primate evolution. It is possible that larger facial repertoires may have contributed to strengthening pair bonds in primates, because richer facial repertoires provide more opportunities for FES which can effectively increase the "understanding" between partners through smoother coordination of interaction patterns. This study supports the social complexity hypothesis as the driving force for the evolution of complex communication signaling. © 2018 Wiley Periodicals, Inc.
Yetiser, Sertac
2018-06-08
Three patients with large intratemporal facial schwannomas underwent tumor removal and facial nerve reconstruction with hypoglossal anastomosis. The surgical strategy for the cases was tailored to the location of the mass and its extension along the facial nerve. To provide data on the different clinical aspects of facial nerve schwannoma, the appropriate planning for management, and the predictive outcomes of facial function. Three patients with facial schwannomas (two men and one woman, ages 45, 36, and 52 years, respectively) who presented to the clinic between 2009 and 2015 were reviewed. They all had hearing loss but normal facial function. All patients were operated on with radical tumor removal via mastoidectomy and subtotal petrosectomy and simultaneous cranial nerve (CN) 7- CN 12 anastomosis. Multiple segments of the facial nerve were involved ranging in size from 3 to 7 cm. In the follow-up period of 9 to 24 months, there was no tumor recurrence. Facial function was scored House-Brackmann grades II and III, but two patients are still in the process of functional recovery. Conservative treatment with sparing of the nerve is considered in patients with small tumors. Excision of a large facial schwannoma with immediate hypoglossal nerve grafting as a primary procedure can provide satisfactory facial nerve function. One of the disadvantages of performing anastomosis is that there is not enough neural tissue just before the bifurcation of the main stump to provide neural suturing without tension because middle fossa extension of the facial schwannoma frequently involves the main facial nerve at the stylomastoid foramen. Reanimation should be processed with extensive backward mobilization of the hypoglossal nerve. Georg Thieme Verlag KG Stuttgart · New York.
Imaging the Facial Nerve: A Contemporary Review
Gupta, Sachin; Mends, Francine; Hagiwara, Mari; Fatterpekar, Girish; Roehm, Pamela C.
2013-01-01
Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell's palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers. PMID:23766904
Dynamic facial expression recognition based on geometric and texture features
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Zengfu
2018-04-01
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.
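A minimal sketch of the classification stage described above, in which per-sequence geometric features (landmark displacements between the first frame and a later frame) are fed to a Support Vector Machine, follows. The landmark detector and texture features are omitted and the data are synthetic placeholders; this is not the authors' implementation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def geometric_features(first_frame_landmarks, later_frame_landmarks):
    """Each input is a (68, 2) landmark array; the feature vector is the flattened
    landmark displacement between the first frame and a later frame of the sequence."""
    return (later_frame_landmarks - first_frame_landmarks).ravel()

rng = np.random.default_rng(0)
first = rng.normal(size=(68, 2))
later = first + rng.normal(scale=0.1, size=(68, 2))
print(geometric_features(first, later).shape)         # (136,) per pairwise image

n_sequences, n_landmarks = 200, 68
X = rng.normal(size=(n_sequences, n_landmarks * 2))   # stand-in displacement features
y = rng.integers(0, 7, size=n_sequences)              # 7 expression labels, as in CK+

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))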
Axelrod, Vadim; Yovel, Galit
2010-08-15
Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Sound-induced facial synkinesis following facial nerve paralysis.
Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F
2009-08-01
Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.
Noninvasive Facial Rejuvenation. Part 1: Patient-Directed
Commander, Sarah Jane; Chang, Daniel; Fakhro, Abdulla; Nigro, Marjory G.; Lee, Edward I.
2016-01-01
A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin and rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but the patients are in control of this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the first in a three-part series describing noninvasive facial rejuvenation. The authors focus on patient-directed facial rejuvenation. It is important, however, to emphasize that even in a patient-directed modality, a physician's involvement through education and guidance is integral to its success. PMID:27478421
Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J
2014-09-01
Patients recovering from critical illness especially those with critical illness related neuropathy, myopathy, or burns to face, arms and hands are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low cost movement tracking systems based around a laptop webcam and a laser/optical gaming system sensor were utilised as control inputs for on-screen text creation software and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, (ii) arm movement tracking by laser/camera gaming sensor and modified software. 16 volunteers with simulated tracheostomy and bandaged arms to simulate communication via gross movements of a burned limb, communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with each system respectively, however all messages were comprehensible. Speed of sentence formation ranged from 58 to 120s with the facial feature tracking system, and 60-160s with the arm movement tracking system. The average speed of sentence formation was 81s (range 58-120) and 104s (range 60-160) for facial feature and arm tracking systems respectively, (P<0.001, 2-tailed independent sample t-test). Both devices may be potentially useful communication aids in patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
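The general idea behind option (i), mapping a tracked facial position in a webcam frame to on-screen cursor coordinates, can be sketched as follows. This uses OpenCV's stock Haar face detector rather than the facial-feature tracker evaluated in the study, and the screen resolution is a placeholder.

import cv2

SCREEN_W, SCREEN_H = 1920, 1080   # placeholder screen resolution
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_to_cursor(frame):
    """Return (x, y) screen coordinates for the centre of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face by area
    frame_h, frame_w = gray.shape
    return int((x + w / 2) / frame_w * SCREEN_W), int((y + h / 2) / frame_h * SCREEN_H)

cap = cv2.VideoCapture(0)          # webcam
ok, frame = cap.read()
if ok:
    print(face_to_cursor(frame))   # feed this into an on-screen keyboard cursor
cap.release()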
Gold, Michael H; Biesman, Brian S; Taylor, Mark
2017-06-01
Fractional bipolar radiofrequency treatment and treatment with bipolar radiofrequency combined with infrared light have been shown in previous trials to safely and effectively improve the appearance of facial wrinkles. To evaluate a high-energy protocol with combined bipolar radiofrequency and infrared light energies for improvement of photoaged facial skin. Seventy-two patients presenting with mild to moderate facial wrinkles underwent a single full-face treatment (n=54) or two treatments (n=18) at 6-week intervals. Independent blinded assessment and investigator assessment were performed, using the Fitzpatrick Wrinkle and Elastosis Scale (0-9) and the Global Aesthetic Improvement Scale. Patients also completed a self-assessment questionnaire concerning satisfaction with the treatment. All patients achieved some degree of improvement in their wrinkles and skin appearance following a single treatment or two treatments with the enhanced-energy protocol. Blinded evaluation demonstrated 71% and 70% of the patients showing improvement of one unit or greater on the Fitzpatrick Scale at the 12-week and 24-week follow-ups post-treatment, respectively. Similar results were reported by investigators. On the Global Aesthetic Improvement Scale, investigators observed 87%, 91% and 81% of patients showing improvement at the 6-, 12-, and 24-week post-treatment follow-ups, respectively. Patients tolerated the treatments well and were satisfied with the clinical results. The enhanced-energy treatment protocol, combining fractional bipolar radiofrequency treatment and treatment with bipolar radiofrequency plus infrared light applications, yields significant improvement of skin texture, wrinkling, and overall appearance following a single treatment. The results appear gradually over time and are maintained for at least 6 months post-treatment. © 2017 Wiley Periodicals, Inc.
Hindy, A.
2009-01-01
Summary Facial burns vary from relatively minor insults to severe debilitating injuries. Sustaining a burn injury is often a psychological trauma for the victim and is especially menacing when the face and neck are involved. This study was carried out on 60 patients with superficial dermal burns to the face admitted to the Burn Unit of Tanta University Hospital, Egypt, from September 2007 to July 2008. The patients were allocated randomly to one of three groups, each of which was treated with one of the following: sodium carboxymethyl-cellulose silver (Aquacel Ag®), MEBO® (moist exposed burn ointment), or saline-soaked dressing. We found that patients managed with MEBO® had less pain and itching and easier movement than those managed with Aquacel Ag®, while the Aquacel Ag® group required a shorter duration of time for healing, without any bad odour, than the MEBO® group. Quality of healing and patient satisfaction were nearly equal as regards MEBO® and Aquacel Ag®. Saline-soaked dressings were least satisfactory - they caused the most pain and itching, limited the patients' movements the most, needed the longest time for healing, and gave patients the least satisfaction. It was concluded that MEBO® was an excellent choice for management of facial burns owing to its soothing effect, ease of patient movement, easy handling, and good healing properties. Aquacel Ag® was found to be comparable to MEBO® and is specially recommended when frequent dressings cause difficulties for the patients or when they cannot accept a bad odour; saline-soaked dressings are not recommended for the management of facial burns because of the pain they cause, itching, limitation of patient movement, and delayed healing. PMID:21991168
Children of the Philippines: attitudes toward visible physical impairment.
Harper, D C; Peterson, D B
2001-11-01
This pilot study was designed to evaluate children's attitudes and understanding of physical disabilities with special reference to those with craniofacial anomalies in the Philippines. Children with and without craniofacial anomalies were studied. This was a two-group correlational design with additional statistical assessment of subgroup differences. Each group was interviewed and information obtained on a standard disability preference task, attributions for playmate choice, and frequency of contact with disabilities. Parents completed a structured interview. Participants were 122 children recruited from Negros, Philippines. Fifty-four children with craniofacial anomalies (aged 7 to 12 years) were enrolled in the study, and 68 children without any disabilities were recruited from a local school in Bacolod City, Negros, Philippines. Participants completed a picture-ranking interview of specific physical disabilities and provided their reasons for their play choices and their contact with physical disabilities. The Kendall W correlation was significant for the children with craniofacial anomalies and for those without physical disabilities. Both groups reported lower preferences for disabilities that interfere with play and social interactions. Children depicted with facial anomalies received lower preference, compared with other physical disabilities. Children with craniofacial anomalies who have experienced surgical repair reported more positive rankings for the child depicted with a facial cleft. Sex differences in disability preference were noted. Children in the Philippines with and without craniofacial differences revealed similarities in preferences to children in several Western (United States) and non-Western countries. Children depicted with facial anomalies received lower preference than other visible physical differences. Children reported both positive and negative explanations for their disability play preferences. Facial differences may result in illogical and negative explanations for social avoidance among children. Similar reactions are noted in other parts of the world.
Wang, Siyu; Zhang, Guirong; Meng, Huimin; Li, Li
2013-02-01
Evidence has demonstrated that sweat is an important factor affecting the physiological properties of skin. We aimed to assess the effects of exercise-induced sweating on the sebum, stratum corneum (SC) hydration and skin surface pH of facial skin. 102 subjects (aged 5-60, divided into five groups) were measured with a combination device ('Derma Unit SSC3') in the frontal and zygomatic regions at a resting state (RS), at the beginning of sweating (BS), during excessive sweating (ES) and one hour after sweating (AS). Compared to RS, SC hydration in both regions increased at BS and during ES, and sebum increased at BS but was lower during ES. Compared to ES, sebum increased at AS but remained lower than at RS. Compared to RS, pH decreased in both regions at BS in the majority of groups, and increased in the frontal region during ES and in the zygomatic region at AS. In the majority of groups, pH increased in both regions during ES compared to BS, but decreased at AS compared to ES. The study implies that even in summer, after excessive sweating, lipid products should be applied locally in order to maintain the stability of the barrier function of the SC. It also suggests that after a short term (1 h or less) of self-adjustment, excessive sweat from moderate exercise will not impair the primarily acidic surface pH of the facial skin. Exercise-induced sweating significantly affected the skin physiological properties of the facial region. © 2012 John Wiley & Sons A/S.
Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji
2015-05-01
Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
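The abstract above describes pigment-specific image processing followed by a spatial-frequency heterogeneity analysis. The sketch below illustrates the general idea only: the log-absorbance stand-ins for the melanin and hemoglobin indexes, the Gaussian high-pass step, and the thresholds are assumptions made for illustration, not the authors' XYZ-based conversion or settings.

```python
# Illustrative pigment-oriented heterogeneity analysis in the spirit of the
# study above. The "melanin-like" and "hemoglobin-like" indexes and all
# thresholds are assumptions for this sketch only.
import numpy as np
from scipy.ndimage import gaussian_filter

def pigment_indexes(rgb):
    """Very rough melanin/hemoglobin-like indexes from an RGB image in [0, 1]."""
    rgb = np.clip(rgb.astype(float), 1e-4, 1.0)
    absorbance = -np.log10(rgb)                                # optical-density-style transform
    melanin_like = absorbance[..., 0]                          # red-channel absorbance (assumption)
    hemoglobin_like = absorbance[..., 1] - absorbance[..., 0]  # green minus red (assumption)
    return melanin_like, hemoglobin_like

def heterogeneity_index(index_img, sigma=8.0, k=2.0):
    """Fraction of pixels whose high-pass deviation exceeds k standard deviations."""
    detail = index_img - gaussian_filter(index_img, sigma)     # remove low-frequency shading
    return float(np.mean(np.abs(detail) > k * detail.std()))

if __name__ == "__main__":
    cheek = np.random.rand(256, 256, 3)                        # placeholder for a cheek image
    mel, hem = pigment_indexes(cheek)
    print(heterogeneity_index(mel), heterogeneity_index(hem))
```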
Fanti, Kostas A; Kyranides, Melina Nicole; Panayiotou, Georgia
2017-02-01
The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of participants' facial expressions was analysed using FaceReader, a facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced specifically more angry facial expressions when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. FaceReader findings were verified by the reduced facial electromyography at the corrugator, but not the zygomatic, muscle in response to violent films shown by individuals high in CU traits. Additional analysis suggested that sympathy to victims explained the association between CU traits and reduced facial reactions to violent films.
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" system (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error," a new measurement indicator calculated with reverse-engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinical use.
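As a rough illustration of how a "3D error" between a registered test scan and a reference scan can be computed, the following sketch takes the mean nearest-neighbour distance between already ICP-aligned point clouds; it is a stand-in for, not a reproduction of, the Geomagic Studio computation used in the study.

```python
# Minimal sketch of a "3D error" metric between an (already ICP-registered)
# test scan and a reference scan: the mean unsigned nearest-neighbour distance
# from each test point to the reference samples. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def mean_3d_error(test_points, reference_points):
    """test_points, reference_points: (N, 3) arrays of registered coordinates (mm)."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points)   # closest reference point per test point
    return distances.mean(), distances.std()

if __name__ == "__main__":
    ref = np.random.rand(5000, 3) * 100.0             # placeholder reference scan
    test = ref + np.random.normal(0, 0.5, ref.shape)  # placeholder test scan with noise
    mean_err, sd_err = mean_3d_error(test, ref)
    print(f"3D error: {mean_err:.2f} ± {sd_err:.2f} mm")
```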
Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta
2011-01-01
Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries for post-lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: sham (no lesion surgery), and dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesioned animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined in contralateral primary motor cortex slices stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five-week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed shrinkage of their dendritic branches at statistically significant levels. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of the functional sequelae observed in peripheral facial palsy patients.
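Sholl analysis counts how often dendritic branches cross concentric circles centred on the soma. The sketch below assumes traced dendrites are available as simple 2D line segments, which is a simplification for illustration rather than the output format of any particular reconstruction tool.

```python
# Compact sketch of Sholl analysis: count how many traced dendritic segments
# cross each concentric circle centred on the soma. The (pairs of 2D points,
# in micrometres) segment format is an assumption for illustration.
import numpy as np

def sholl_counts(segments, soma, radii):
    """segments: (N, 2, 2) array of line segments [[x1, y1], [x2, y2]];
    soma: (2,) centre; radii: iterable of circle radii.
    Returns the number of segments crossing each radius."""
    d1 = np.linalg.norm(segments[:, 0, :] - soma, axis=1)
    d2 = np.linalg.norm(segments[:, 1, :] - soma, axis=1)
    counts = []
    for r in radii:
        # a segment is counted when its endpoints lie on opposite sides of the circle
        counts.append(int(np.sum((d1 - r) * (d2 - r) < 0)))
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segs = rng.uniform(-200, 200, size=(300, 2, 2))   # toy dendritic segments (µm)
    print(sholl_counts(segs, soma=np.zeros(2), radii=range(20, 201, 20)))
```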
Tang, Dorothy Y Y; Liu, Amy C Y; Lui, Simon S Y; Lam, Bess Y H; Siu, Bonnie W M; Lee, Tatia M C; Cheung, Eric F C
2016-02-28
Impairment in facial emotion perception is believed to be associated with aggression. Schizophrenia patients with antisocial features are more impaired in facial emotion perception than their counterparts without these features. However, previous studies did not define the comorbidity of antisocial personality disorder (ASPD) using stringent criteria. We recruited 30 participants with dual diagnoses of ASPD and schizophrenia, 30 participants with schizophrenia and 30 controls. We employed the Facial Emotional Recognition paradigm to measure facial emotion perception, and administered a battery of neurocognitive tests. The Life History of Aggression scale was used. ANOVAs and ANCOVAs were conducted to examine group differences in facial emotion perception, and control for the effect of other neurocognitive dysfunctions on facial emotion perception. Correlational analyses were conducted to examine the association between facial emotion perception and aggression. Patients with dual diagnoses performed worst in facial emotion perception among the three groups. The group differences in facial emotion perception remained significant, even after other neurocognitive impairments were controlled for. Severity of aggression was correlated with impairment in perceiving negative-valenced facial emotions in patients with dual diagnoses. Our findings support the presence of facial emotion perception impairment and its association with aggression in schizophrenia patients with comorbid ASPD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.
Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L
2010-09-01
To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Over a 10-year period we have studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we have identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key differences lying in the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem to be specific pain syndromes with a distinctive location, and may deserve a nosologic status just as other focal pain syndromes of the face do. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.
ERIC Educational Resources Information Center
Donahoo, Saran
2008-01-01
Background/Context: Although frequently associated with the United States, affirmative action is not a uniquely American social policy. Indeed, 2003 witnessed review and revision of affirmative action policies affecting higher education institutions in both France and the United States. Using critical race theory (CRT) as a theoretical lens, this…
77 FR 48586 - Notice of Final Federal Agency Actions on United States Highway 77
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
.... Sec. 139(l)(1). The actions relate to a proposed highway project, United States (US) 77, extending from Interstate Highway 37 (IH 37) in Corpus Christi, Texas to US 83 in Harlingen, Texas. Those actions..., permits, and approvals for the following highway project in the State of Texas: United States highway (US...
Analysis of facial expressions in parkinson's disease through video-based automatic methods.
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
2017-04-01
The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
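The expressivity measure described above (Euclidean distance of the tracked facial model from a neutral baseline) can be sketched as follows; the landmark array shapes are assumed and the face tracker itself is not reproduced.

```python
# Sketch of the expressivity measure: for each video frame, the mean Euclidean
# distance of the tracked facial landmarks from a neutral baseline frame.
# The (frames, landmarks, 2) array format is an assumed convention.
import numpy as np

def expressivity_curve(landmarks, neutral):
    """landmarks: (T, L, 2) tracked points; neutral: (L, 2) baseline points.
    Returns a length-T curve of mean per-landmark distances from neutral."""
    return np.linalg.norm(landmarks - neutral[None, :, :], axis=2).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    neutral = rng.uniform(0, 1, size=(68, 2))                    # toy neutral landmarks
    frames = neutral + 0.05 * rng.standard_normal((120, 68, 2))  # toy expression clip
    curve = expressivity_curve(frames, neutral)
    print(curve.mean())   # larger values = larger facial movements
```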
Interim Action Proposed Plan for the Chemicals, Metals, and Pesticides (CMP) Pits Operable Unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.
2002-06-18
The purpose of this Interim Action Proposed Plan (IAPP) is to describe the preferred interim remedial action for addressing the Chemicals, Metals, and Pesticides (CMP) Pits Operable Unit and to provide an opportunity for public input into the remedial action selection process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Environmental Restoration
This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 477, N-Tunnel Muckpile. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the U.S. Department of Energy, and the U.S. Department of Defense. Corrective Action Unit 477 is comprised of one Corrective Action Site (CAS): • 12-06-03, Muckpile. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure with no further action, by placing use restrictions on CAU 477.
Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.
Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P
2009-07-01
Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.
Recognizing Facial Expressions Automatically from Video
NASA Astrophysics Data System (ADS)
Shan, Caifeng; Braspenning, Ralph
Facial expressions, resulting from movements of the facial muscles, are changes in the face in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.
Middle ear osteoma causing progressive facial nerve weakness: a case report.
Curtis, Kate; Bance, Manohar; Carter, Michael; Hong, Paul
2014-09-18
Facial nerve weakness is most commonly due to Bell's palsy or cerebrovascular accidents. Rarely, a middle ear tumor presents with facial nerve dysfunction. We report a very unusual case of a middle ear osteoma in a 49-year-old Caucasian woman causing progressive facial nerve deficit. A subtle middle ear lesion was observed on otoscopy, and computed tomographic images demonstrated an osseous middle ear tumor. Complete surgical excision resulted in the partial recovery of facial nerve function. Facial nerve dysfunction is rarely caused by middle ear tumors. The weakness is typically due to a compressive effect on the middle ear portion of the facial nerve. Early recognition is crucial since removal of these lesions may lead to the recuperation of facial nerve function.
Wireless electronic-tattoo for long-term high fidelity facial muscle recordings
NASA Astrophysics Data System (ADS)
Inzelberg, Lilah; David Pur, Moshe; Steinberg, Stanislav; Rand, David; Farah, Maroun; Hanein, Yael
2017-05-01
Facial surface electromyography (sEMG) is a powerful tool for objective evaluation of human facial expressions and has accordingly been suggested in recent years for a wide range of psychological and neurological assessment applications. Owing to technical challenges, in particular the cumbersome gelled electrodes, the use of facial sEMG has so far been limited. Using innovative temporary tattoos optimized specifically for facial applications, we demonstrate the use of sEMG as a platform for robust identification of facial muscle activation. In particular, differentiation between diverse facial muscles is demonstrated. We also demonstrate a wireless version of the system. The potential use of the presented technology for user-experience monitoring and objective psychological and neurological evaluations is discussed.
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
NASA Astrophysics Data System (ADS)
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. The use of biometric technology with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
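A conventional version of the pipeline described above (PCA features followed by a multiclass RBF-kernel SVM) can be sketched with scikit-learn. Since the ensemble least-squares variant (MELS-SVM) is not a standard library class, an ordinary SVC stands in for it here, and random arrays stand in for the 185 face images.

```python
# Sketch of a PCA + multiclass RBF-SVM expression classifier. scikit-learn's
# SVC is a stand-in for the paper's MELS-SVM; the data are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((185, 64 * 64))           # flattened face images (placeholder)
y = rng.integers(0, 6, size=185)                  # six expression labels

model = make_pipeline(
    PCA(n_components=40),                         # eigenface-style feature extraction
    SVC(kernel="rbf", C=10.0, gamma="scale"),     # multiclass RBF-kernel classifier
)
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```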
Realistic facial animation generation based on facial expression mapping
NASA Astrophysics Data System (ADS)
Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe
2014-01-01
Facial expressions reflect the internal emotional states of a character or responses to social communications. Though much effort has been made to generate realistic facial expressions, it still remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
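The simplest form of AU-based blending underlying methods like the one above is a weighted sum of per-AU peak displacements added to the neutral target face. The sketch below shows only that linear blend; the intermediate model space and the psychophysically derived dynamics of the paper are not reproduced, and all shapes and weights are assumed.

```python
# Minimal sketch of AU-based blending: the animated mesh is the neutral target
# face plus a weighted sum of per-AU peak displacements. Shapes and weights
# are illustrative assumptions.
import numpy as np

def blend_frame(neutral, au_peaks, weights):
    """neutral: (V, 3) target-face vertices; au_peaks: (A, V, 3) AU peak meshes
    conformed to the target; weights: (A,) activation in [0, 1] for this frame."""
    displacements = au_peaks - neutral[None, :, :]
    return neutral + np.tensordot(weights, displacements, axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    neutral = rng.uniform(-1, 1, size=(500, 3))            # toy vertex positions
    au_peaks = neutral + 0.1 * rng.standard_normal((4, 500, 3))
    frame = blend_frame(neutral, au_peaks, np.array([0.8, 0.0, 0.3, 0.5]))
    print(frame.shape)
```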
Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto
2012-01-01
Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.
Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio
2014-01-01
Age-group membership effects on explicit emotional facial expression recognition have been widely demonstrated. In this study we investigated whether age-group membership could also affect implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial Electromyography (EMG) and Respiratory Sinus Arrhythmia (RSA) were recorded from teenager and adult participants during the observation of facial expressions performed by teenager and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which young people and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with peers. Findings confirmed that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916
de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal
2018-06-01
Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression to that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.
Aspects of Facial Contrast Decrease with Age and Are Cues for Age Perception
Porcheron, Aurélie; Mauger, Emmanuelle; Russell, Richard
2013-01-01
Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast (the contrast between facial features and the surrounding skin) decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when these aspects of facial contrast were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger. PMID:23483959
Prass, R L; Kinney, S E; Hardy, R W; Hahn, J F; Lüders, H
1987-12-01
Facial electromyographic (EMG) activity was continuously monitored via loudspeaker during eleven translabyrinthine and nine suboccipital consecutive unselected acoustic neuroma resections. Ipsilateral facial EMG activity was synchronously recorded on the audio channels of operative videotapes, which were retrospectively reviewed in order to allow detailed evaluation of the potential benefit of various acoustic EMG patterns in the performance of specific aspects of acoustic neuroma resection. The use of evoked facial EMG activity was classified and described. Direct local mechanical (surgical) stimulation and direct electrical stimulation were of benefit in the localization and/or delineation of the facial nerve contour. Burst and train acoustic patterns of EMG activity appeared to indicate surgical trauma to the facial nerve that would not have been appreciated otherwise. Early results of postoperative facial function of monitored patients are presented, and the possible value of burst and train acoustic EMG activity patterns in the intraoperative assessment of facial nerve function is discussed. Acoustic facial EMG monitoring appears to provide a potentially powerful surgical tool for delineation of the facial nerve contour, the ongoing use of which may lead to continued improvement in facial nerve function preservation through modification of dissection strategy.
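The burst/train distinction mentioned above can be illustrated with a simple supra-threshold episode detector on a rectified EMG channel; the amplitude threshold and the 500 ms burst/train cut-off below are arbitrary assumptions for illustration, not validated clinical criteria.

```python
# Illustrative separation of "burst" and "train" activity in a rectified
# facial EMG trace: supra-threshold episodes are detected and labelled by
# duration. Threshold and cut-off values are assumptions for this sketch.
import numpy as np

def classify_emg_episodes(emg, fs, thresh_uv=100.0, train_cutoff_s=0.5):
    """emg: 1-D rectified EMG trace (µV); fs: sampling rate (Hz).
    Returns a list of (start_s, duration_s, label) tuples."""
    active = np.abs(emg) > thresh_uv
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        stops = np.r_[stops, active.size]
    episodes = []
    for start, stop in zip(starts, stops):
        duration = (stop - start) / fs
        label = "train" if duration >= train_cutoff_s else "burst"
        episodes.append((start / fs, duration, label))
    return episodes

if __name__ == "__main__":
    fs = 2000
    emg = np.abs(20.0 * np.random.standard_normal(10 * fs))  # toy baseline activity
    emg[4 * fs:6 * fs] += 300.0                              # simulated 2 s "train" episode
    print(classify_emg_episodes(emg, fs))
```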
Rerksuppaphol, Lakkana; Charoenpong, Theekapun; Rerksuppaphol, Sanguansak
2016-02-01
To evaluate the efficacy of acupuncture treatments in treating facial melasma, contrasting treatments involving facial acupuncture with facial/body acupuncture. Women suffering with melasma were randomly assigned into: 1) facial acupuncture (n = 20); or 2) facial/body acupuncture (n = 21). Each group was given 2 sessions per week for 8 weeks. Melasma area and darkness of its pigmentation were assessed using digital images. 95.2% and 90% of participants in facial/body and facial acupuncture, respectively, had decreased melasma areas, with a mean reduction area being 2.6 cm(2) (95%CI 1.6-3.6 cm(2)) and 2.4 cm(2) (95%CI 1.6-3.3 cm(2)), respectively. 66.7% (facial/body acupuncture) and 80.0% (facial acupuncture) of participants had lighter melasma pigmentation compared to their baselines (p-value = 0.482). Facial acupuncture, with or without body acupuncture, was shown to be effective in decreasing the size of melasma areas. This study is registered with the Thai Clinical Trial Registry (TCTR20140903004). Copyright © 2015 Elsevier Ltd. All rights reserved.
Facial nerve mapping and monitoring in lymphatic malformation surgery.
Chiara, Jospeh; Kinney, Greg; Slimp, Jefferson; Lee, Gi Soo; Oliaei, Sepehr; Perkins, Jonathan A
2009-10-01
Establish the efficacy of preoperative facial nerve mapping and continuous intraoperative EMG monitoring in protecting the facial nerve during resection of cervicofacial lymphatic malformations. Retrospective study in which patients were clinically followed for at least 6 months postoperatively, and long-term outcome was evaluated. Patient demographics, lesion characteristics (i.e., size, stage, location) were recorded. Operative notes revealed surgical techniques, findings, and complications. Preoperative, short-/long-term postoperative facial nerve function was standardized using the House-Brackmann Classification. Mapping was done prior to incision by percutaneously stimulating the facial nerve and its branches and recording the motor responses. Intraoperative monitoring and mapping were accomplished using a four-channel, free-running EMG. Neurophysiologists continuously monitored EMG responses and blindly analyzed intraoperative findings and final EMG interpretations for abnormalities. Seven patients collectively underwent 8 lymphatic malformation surgeries. Median age was 30 months (2-105 months). Lymphatic malformation diagnosis was recorded in 6/8 surgeries. Facial nerve function was House-Brackmann grade I in 8/8 cases preoperatively. Facial nerve was abnormally elongated in 1/8 cases. EMG monitoring recorded abnormal activity in 4/8 cases--two suggesting facial nerve irritation, and two with possible facial nerve damage. Transient or long-term facial nerve paresis occurred in 1/8 cases (House-Brackmann grade II). Preoperative facial nerve mapping combined with continuous intraoperative EMG and mapping is a successful method of identifying the facial nerve course and protecting it from injury during resection of cervicofacial lymphatic malformations involving the facial nerve.
A small-world network model of facial emotion recognition.
Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto
2016-01-01
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
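The small-world analysis described above can be sketched by building a similarity-thresholded graph over the emotion images and computing its average shortest path length and clustering coefficient; random similarities stand in for the participants' ratings and the edge threshold is arbitrary.

```python
# Sketch of a small-world analysis on a facial-emotion similarity graph.
# Random similarities replace the paired-comparison ratings from the study.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n = 81                                         # 6 prototypes + 75 morphs
sim = rng.uniform(0, 1, size=(n, n))
sim = (sim + sim.T) / 2                        # symmetric similarity matrix

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if sim[i, j] > 0.8:                    # keep only highly similar pairs
            G.add_edge(i, j)

if nx.is_connected(G):
    print("average path length:", nx.average_shortest_path_length(G))
print("clustering coefficient:", nx.average_clustering(G))
```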
Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.
Kim, Jin
2013-02-01
The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion, but our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might also be useful in acute cases of facial paralysis to improve facial asymmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients after acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, nine patients with Bell's palsy, five with herpes zoster oticus and four with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to an increased HB and SB grade within 6 months after injection. Use of botulinum toxin in acute facial palsy is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to more symmetric function. Especially in patients with medical problems that limit the medical or surgical treatment options, botulinum toxin therapy represents a useful alternative. (C) 2013 Otology & Neurotology, Inc.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings by experienced medical professionals on the plausibility of the simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings by experienced medical professionals on the plausibility of the simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements and its simulation represents plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
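The stitching step above is a Poisson guided interpolation. As a rough off-the-shelf analogue of that step, OpenCV's seamlessClone performs standard Poisson image blending; the snippet below uses it on placeholder images and is not the authors' own solver, which operates on 3D facial textures.

```python
# Illustrative Poisson blending with OpenCV's seamlessClone, used here only as
# an analogue of the guided-interpolation stitching step; inputs are placeholders.
import numpy as np
import cv2

healthy_face = np.full((256, 256, 3), 180, dtype=np.uint8)          # placeholder target
disfigurement_patch = np.full((100, 100, 3), 120, dtype=np.uint8)   # placeholder source
mask = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(mask, (50, 50), 40, 255, -1)                             # region to transfer

center = (128, 128)                                                 # where to place the patch
blended = cv2.seamlessClone(disfigurement_patch, healthy_face, mask,
                            center, cv2.NORMAL_CLONE)
print(blended.shape, blended.dtype)
```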
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. B. Mitchem
2001-08-22
This annual progress and performance evaluation report discusses the groundwater remedial actions in the 100 Area, including the interim actions at the 100-HR-3 and 100-KR-4 Operable Units, and also discusses the expedited response action in the 100-NR-2 operable unit.
Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yu, Lu; Zhang, Jingyu; Liu, Yunhai
2000-12-01
In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, but not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established, in which facial animation is reconstructed from FAPs extracted from a natural video scene.
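In MPEG-4 facial animation, each low-level FAP displaces a feature point along a defined axis, scaled by face-specific FAP units (FAPUs). The sketch below shows only that application step with a tiny illustrative FAP table; it does not reproduce the full FAP set or the parameter-extraction stage discussed above.

```python
# Minimal sketch of applying decoded FAP values to facial feature points:
# each FAP displaces one point along an axis, scaled by a face-specific FAPU.
# The two-entry FAP table and the FAPU values are illustrative assumptions.
import numpy as np

# (feature point index, displacement axis, FAPU name) -- illustrative subset only
FAP_TABLE = {
    "open_jaw":            (0, np.array([0.0, -1.0, 0.0]), "MNS"),
    "stretch_l_cornerlip": (1, np.array([1.0, 0.0, 0.0]),  "MW"),
}

def apply_faps(feature_points, fap_values, fapus):
    """feature_points: (N, 3) neutral positions; fap_values: {fap_name: int};
    fapus: {fapu_name: float} face-specific units. Returns displaced points."""
    out = feature_points.copy()
    for name, value in fap_values.items():
        idx, axis, fapu_name = FAP_TABLE[name]
        out[idx] += value * fapus[fapu_name] * axis   # FAP value scaled by its FAPU
    return out

if __name__ == "__main__":
    neutral = np.zeros((2, 3))
    moved = apply_faps(neutral, {"open_jaw": 200, "stretch_l_cornerlip": 100},
                       {"MNS": 0.001, "MW": 0.002})
    print(moved)
```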
Hypoglossal-facial-jump-anastomosis without an interposition nerve graft.
Beutner, Dirk; Luers, Jan C; Grosheva, Maria
2013-10-01
The hypoglossal-facial-anastomosis is the most often applied procedure for the reanimation of a long lasting peripheral facial nerve paralysis. The use of an interposition graft and its end-to-side anastomosis to the hypoglossal nerve allows the preservation of the tongue function and also requires two anastomosis sites and a free second donor nerve. We describe the modified technique of the hypoglossal-facial-jump-anastomosis without an interposition and present the first results. Retrospective case study. We performed the facial nerve reconstruction in five patients. The indication for the surgery was a long-standing facial paralysis with preserved portion distal to geniculate ganglion, absent voluntary activity in the needle facial electromyography, and an intact bilateral hypoglossal nerve. Following mastoidectomy, the facial nerve was mobilized in the fallopian canal down to its bifurcation in the parotid gland and cut in its tympanic portion distal to the lesion. Then, a tensionless end-to-side suture to the hypoglossal nerve was performed. The facial function was monitored up to 16 months postoperatively. The reconstruction technique succeeded in all patients: The facial function improved within the average time period of 10 months to the House-Brackmann score 3. This modified technique of the hypoglossal-facial reanimation is a valid method with good clinical results, especially in cases of a preserved intramastoidal facial nerve. Level 4. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
How to Avoid Facial Nerve Injury in Mastoidectomy?
Ryu, Nam-Gyu
2016-01-01
Unexpected iatrogenic facial nerve paralysis not only causes facial disfigurement, but also has a devastating effect on the social, psychological, and economic aspects of an affected person's life. The aims of this study were to postulate where surgeons had mistakenly drilled or where the nerve had been obscured by granulations or fibrous bands, and to look for a surgical approach focused on the safety of the facial nerve in mastoid surgery. We found 14 cases of iatrogenic facial nerve injury (IFNI) during mastoid surgery over 5 years in Korea. The medical records of all the patients were obtained, and the injured facial nerve segment was analyzed together with the mastoidectomy technique used. Eleven patients underwent facial nerve exploration and three patients had conservative management. 43% (6 cases) of iatrogenic facial nerve injuries had occurred in the tympanic segment, 28.5% (4 cases) in the second genu combined with the tympanic segment, and 28.5% (4 cases) in the mastoid segment. Surgeons should try to identify the facial nerve using available landmarks and keep in mind the anomalies of the facial nerve. With the use of intraoperative facial nerve monitoring, IFNI could be avoided in more cases. Many authors have emphasized the importance of intraoperative facial nerve monitoring, even in primary otologic surgery. However, an anatomical understanding of intratemporal landmarks, together with meticulous dissection, cannot be overemphasized as a means of preventing IFNI. PMID:27626078
Facial palsy after dental procedures - Is viral reactivation responsible?
Gaudin, Robert A; Remenschneider, Aaron K; Phillips, Katie; Knipfer, Christian; Smeets, Ralf; Heiland, Max; Hadlock, Tessa A
2017-01-01
Herpes labialis viral reactivation has been reported following dental procedures, but the incidence, characteristics and outcomes of delayed peripheral facial nerve palsy following dental work is poorly understood. Herein we describe the unique features of delayed facial paresis following dental procedures. An institutional retrospective review was performed to identify patients diagnosed with delayed facial nerve palsy within 30 days of dental manipulation. Demographics, prodromal signs and symptoms, initial medical treatment and outcomes were assessed. Of 2471 patients with facial palsy, 16 (0.7%) had delayed facial paresis following ipsilateral dental procedures. Average age at presentation was 44 yrs and 56% (9/16) were female. Clinical evaluation was consistent with Bell's palsy in 14 (88%) and Ramsay-Hunt syndrome in 2 patients (12%). Patients developed facial paresis an average of 3.9 days after the dental procedure, with all individuals developing a flaccid paralysis (House Brackmann (HB) grade VI) during the acute stage. 50% of patients developed persistent facial palsy in the form of non-flaccid facial paralysis (HBIII-IV). Facial palsy, like herpes labialis, can occur in the days following dental procedures and may also be related to viral reactivation. In this small cohort, long-term facial outcomes appear worse than for spontaneous Bell's palsy. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Facial paralysis for the plastic surgeon
Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory RD; Wirth, Garrett A
2007-01-01
Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or ‘hands-on’, aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper. PMID:19554190
Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane
2013-01-01
The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232
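A two-block partial least squares analysis of the kind used above to assess covariation between shape blocks can be sketched with scikit-learn's PLSCanonical; random arrays stand in for Procrustes-aligned landmark coordinates, and the numbers of landmarks and specimens are arbitrary.

```python
# Sketch of a two-block PLS covariation analysis between facial and
# basicranial landmark blocks. Random data replace real, aligned coordinates.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(4)
n_specimens = 60
facial_shape = rng.standard_normal((n_specimens, 30 * 3))   # 30 facial landmarks (x, y, z)
basicranium = rng.standard_normal((n_specimens, 10 * 3))    # 10 basicranial landmarks

pls = PLSCanonical(n_components=2)
pls.fit(facial_shape, basicranium)
scores_x, scores_y = pls.transform(facial_shape, basicranium)
# Correlation of the first pair of PLS axes summarizes covariation between the blocks
print(np.corrcoef(scores_x[:, 0], scores_y[:, 0])[0, 1])
```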
An Analysis of Biometric Technology as an Enabler to Information Assurance
2005-03-01
Facial recognition systems are gaining momentum as of late. The reason for this is that facial recognition systems ... the traffic camera on the street corner, video technology is everywhere. There are a couple of different methods currently being used for facial ...
Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques
ERIC Educational Resources Information Center
Wang, Yushun; Zhuang, Yueting
2008-01-01
Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…
Exaggerated perception of facial expressions is increased in individuals with schizotypal traits
Uono, Shota; Sato, Wataru; Toichi, Motomi
2015-01-01
Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081
Facial contrast is a cue for perceiving health from the face.
Russell, Richard; Porcheron, Aurélie; Sweda, Jennifer R; Jones, Alex L; Mauger, Emmanuelle; Morizot, Frederique
2016-09-01
How healthy someone appears has important social consequences. Yet the visual cues that determine perceived health remain poorly understood. Here we report evidence that facial contrast (the luminance and color contrast between internal facial features and the surrounding skin) is a cue for the perception of health from the face. Facial contrast was measured from a large sample of Caucasian female faces, and was found to predict ratings of perceived health. Most aspects of facial contrast were positively related to perceived health, meaning that faces with higher facial contrast appeared healthier. In 2 subsequent experiments, we manipulated facial contrast and found that participants perceived faces with increased facial contrast as appearing healthier than faces with decreased facial contrast. These results support the idea that facial contrast is a cue for perceived health. This finding adds to the growing knowledge about perceived health from the face, and helps to ground our understanding of perceived health in terms of lower-level perceptual features such as contrast. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
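Facial contrast, as defined above, can be measured from an image given masks for an internal feature and the surrounding skin. The sketch below uses a Michelson-style luminance ratio, which is one reasonable choice rather than necessarily the exact formula used in the study.

```python
# Sketch of a facial-contrast measurement: luminance contrast between a
# feature region (e.g. lips or brows) and the surrounding skin, given masks.
import numpy as np

def luminance(rgb):
    """Approximate relative luminance from an RGB image in [0, 1]."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def facial_contrast(rgb, feature_mask, skin_mask):
    lum = luminance(rgb)
    l_feat = lum[feature_mask].mean()
    l_skin = lum[skin_mask].mean()
    return (l_skin - l_feat) / (l_skin + l_feat)   # positive when the feature is darker

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    img = rng.uniform(0.3, 0.8, size=(128, 128, 3))           # placeholder face image
    feature = np.zeros((128, 128), dtype=bool)
    feature[60:70, 40:90] = True                              # toy feature mask
    print(facial_contrast(img, feature, ~feature))
```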
77 FR 60613 - National Energy Action Month, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... National Energy Action Month, 2012 By the President of the United States of America A Proclamation A secure... sustainable, vibrant economy. We took bold action to double our use of renewable energy sources like solar... the laws of the United States, do hereby proclaim October 2012 as National Energy Action Month. I call...
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient for conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with closing remarks.
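The forward cloud generator mentioned above is, in the standard cloud-model formulation, parameterised by an expectation Ex, entropy En and hyper-entropy He, and produces "drops" with associated membership degrees. The sketch below implements that generic generator only, not the authors' facial-feature extraction pipeline.

```python
# Standard forward normal cloud generator: each drop is a value x with a
# membership (certainty) degree mu, generated from (Ex, En, He). Generic
# cloud-model sketch, not the paper's feature pipeline.
import numpy as np

def forward_normal_cloud(ex, en, he, n_drops=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    en_prime = rng.normal(en, he, size=n_drops)          # per-drop entropy sample
    en_prime = np.abs(en_prime) + 1e-12                  # keep standard deviations positive
    x = rng.normal(ex, en_prime)                         # drop values
    mu = np.exp(-(x - ex) ** 2 / (2.0 * en_prime ** 2))  # certainty degree of each drop
    return x, mu

if __name__ == "__main__":
    drops, memberships = forward_normal_cloud(ex=0.5, en=0.1, he=0.02, n_drops=5)
    print(np.round(drops, 3), np.round(memberships, 3))
```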
Update on botulinum toxin and dermal fillers.
Berbos, Zachary J; Lipham, William J
2010-09-01
The art and science of facial rejuvenation is an ever-evolving field of medicine, as evidenced by the continual development of new surgical and nonsurgical treatment modalities. Over the past 10 years, the use of botulinum toxin and dermal fillers for aesthetic purposes has risen sharply. Herein, we discuss properties of several commonly used injectable products and provide basic instruction for their use toward the goal of achieving facial rejuvenation. The demand for nonsurgical injection-based facial rejuvenation products has risen enormously in recent years. Used independently or concurrently, botulinum toxin and dermal filler agents offer an affordable, minimally invasive approach to facial rejuvenation. Botulinum toxin and dermal fillers can be used to diminish facial rhytides, restore facial volume, and sculpt facial contours, thereby achieving an aesthetically pleasing, youthful facial appearance.
Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar
2016-03-01
Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.
Late revision or correction of facial trauma-related soft-tissue deformities.
Rieck, Kevin L; Fillmore, W Jonathan; Ettinger, Kyle S
2013-11-01
Surgical approaches used in accessing the facial skeleton for fracture repair are often the same as or similar to those used for cosmetic enhancement of the face. Rarely does facial trauma result in injuries that do not in some way affect the facial soft-tissue envelope either directly or as sequelae of the surgical repair. Knowledge of both skeletal and facial soft-tissue anatomy is paramount to successful clinical outcomes. Facial soft-tissue deformities can arise that require specific evaluation and management for correction. This article focuses on revision and correction of these soft-tissue-related injuries secondary to facial trauma. Copyright © 2013. Published by Elsevier Inc.
Boyette, Jennings R
2014-10-01
Facial trauma in children differs from adults. The growing facial skeleton presents several challenges to the reconstructive surgeon. A thorough understanding of the patterns of facial growth and development is needed to form an individualized treatment strategy. A proper diagnosis must be made and treatment options weighed against the risk of causing further harm to facial development. This article focuses on the management of facial fractures in children. Discussed are common fracture patterns based on the development of the facial structure, initial management, diagnostic strategies, new concepts and old controversies regarding radiologic examinations, conservative versus operative intervention, risks of growth impairment, and resorbable fixation. Copyright © 2014 Elsevier Inc. All rights reserved.
Karmann, Anna J; Lautenbacher, Stefan; Bauer, Florian; Kunz, Miriam
2014-01-01
BACKGROUND: Facial responses to pain are believed to be an act of communication and, as such, are likely to be affected by the relationship between sender and receiver. OBJECTIVES: To investigate this effect by examining the impact that variations in communicative relations (from being alone to being with an intimate other) have on the elements of the facial language used to communicate pain (types of facial responses), and on the degree of facial expressiveness. METHODS: Facial responses of 126 healthy participants to phasic heat pain were assessed in three different social situations: alone, but aware of video recording; in the presence of an experimenter; and in the presence of an intimate other. Furthermore, pain catastrophizing and sex (of participant and experimenter) were considered as additional influences. RESULTS: Whereas similar types of facial responses were elicited independently of the relationship between sender and observer, the degree of facial expressiveness varied significantly, with increased expressiveness occurring in the presence of the partner. Interestingly, being with an experimenter decreased facial expressiveness only in women. Pain catastrophizing and the sex of the experimenter exhibited no substantial influence on facial responses. CONCLUSION: Variations in communicative relations had no effect on the elements of the facial pain language. The degree of facial expressiveness, however, was adapted to the relationship between sender and observer. Individuals suppressed their facial communication of pain toward unfamiliar persons, whereas they overtly displayed it in the presence of an intimate other. Furthermore, when confronted with an unfamiliar person, different situational demands appeared to apply to the two sexes. PMID:24432350
Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J
2018-05-01
To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate overall facial attractiveness (tasks 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos, with or without a smile, for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by an eye-tracking device during task performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combining facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed across facial angles. Laterally rotated faces and the presence of a smile strongly influence visual attention during the evaluation of facial esthetics.
Three-dimensional analysis of facial shape and symmetry in twins using laser surface scanning.
Djordjevic, J; Jadallah, M; Zhurov, A I; Toma, A M; Richmond, S
2013-08-01
The aim of this study was a three-dimensional analysis of facial shape and symmetry in twins. The faces of 37 twin pairs [19 monozygotic (MZ) and 18 dizygotic (DZ)] were laser scanned at the age of 15 during a follow-up of the Avon Longitudinal Study of Parents and Children (ALSPAC), South West of England. Facial shape was analysed using two methods: 1) Procrustes analysis of landmark configurations (the 63 x, y, and z coordinates of 21 facial landmarks) and 2) three-dimensional comparisons of facial surfaces within each twin pair. Monozygotic and DZ twins were compared using ellipsoids representing 95% of the variation in landmark configurations and surface-based average faces. Facial symmetry was analysed by superimposing the original and mirrored facial images. Both analyses showed greater similarity of facial shape in MZ twins, with the lower third being the least similar. Procrustes analysis did not reveal any significant difference in the facial landmark configurations of MZ and DZ twins. The average faces of MZ and DZ males were coincident in the forehead, supraorbital and infraorbital ridges, the bridge of the nose and lower lip. In MZ and DZ females, the eyes, supraorbital and infraorbital ridges, philtrum and lower part of the cheeks were coincident. Zygosity did not seem to influence the amount of facial symmetry. The lower facial third was the most asymmetrical. Three-dimensional analyses revealed differences in the facial shapes of MZ and DZ twins. The relative contribution of genetic and environmental factors is different for the upper, middle and lower facial thirds. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
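As an illustration of the landmark-based half of the analysis above, the following minimal sketch runs a Procrustes superimposition on two hypothetical 21-landmark configurations using scipy.spatial.procrustes; the coordinates are synthetic placeholders rather than ALSPAC data, and the snippet is not the authors' pipeline.

```python
# Hedged sketch: Procrustes comparison of two hypothetical 21-landmark facial
# configurations standing in for a twin pair. All coordinates are synthetic.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

twin_a = rng.normal(size=(21, 3))                         # 21 landmarks, (x, y, z)
twin_b = twin_a + rng.normal(scale=0.05, size=(21, 3))    # slightly perturbed co-twin

# Procrustes superimposition removes translation, scale, and rotation;
# the residual disparity is the sum of squared point-wise differences.
mtx_a, mtx_b, disparity = procrustes(twin_a, twin_b)
print(f"Procrustes disparity: {disparity:.4f}")           # lower = more similar shapes
```

A disparity of this kind, computed within each pair, is the sort of shape-difference score that can then be compared between MZ and DZ groups.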
Facial diplegia: a clinical dilemma.
Chakrabarti, Debaprasad; Roy, Mukut; Bhattacharyya, Amrit K
2013-06-01
Bilateral facial paralysis is a rare clinical entity and presents a diagnostic challenge. Unlike its unilateral counterpart, facial diplegia is seldom secondary to Bell's palsy. Occurring at a frequency of 0.3% to 2% of all facial palsies, it often indicates ominous medical conditions. Guillain-Barré syndrome needs to be considered as a differential in all cases of facial diplegia, where timely treatment would be rewarding. Here, a case of bilateral facial palsy due to Guillain-Barré syndrome with an atypical presentation is reported.
Chao, Xiuhua; Fan, Zhaomin; Han, Yuechen; Wang, Yan; Li, Jianfeng; Chai, Renjie; Xu, Lei; Wang, Haibo
2015-01-01
Local administration of MP delivered by the C/GP-MP-hydrogel can improve the recovery of the facial nerve following crush injury. The findings suggested that locally injected MP delivered by the C/GP-hydrogel might be a promising treatment for facial nerve damage. The aim of this study was to assess the effectiveness of locally administered methylprednisolone (MP) loaded in a chitosan-β-glycerophosphate hydrogel (C/GP-hydrogel) on the regeneration of the facial nerve after crush injury. After crush injury of the left facial nerve, Wistar rats were randomly divided into four groups, and four different therapies were used to treat the damaged facial nerves. At the 1st, 2nd, 3rd, and 4th weeks after injury, the functional recovery and morphological changes of the facial nerves were assessed. The expression of growth associated protein-43 (GAP-43) in the facial nucleus was also evaluated. Locally injected MP delivered by the C/GP-hydrogel effectively accelerated facial functional recovery. In addition, the regenerated facial nerves in the C/GP-MP group were more mature than those in the other groups. The expression of GAP-43 was also enhanced by MP, especially in the C/GP-MP group.
Facial Palsy Following Embolization of a Juvenile Nasopharyngeal Angiofibroma.
Tawfik, Kareem O; Harmon, Jeffrey J; Walters, Zoe; Samy, Ravi; de Alarcon, Alessandro; Stevens, Shawn M; Abruzzo, Todd
2018-05-01
To describe a case of the rare complication of facial palsy following preoperative embolization of a juvenile nasopharyngeal angiofibroma (JNA), and to illustrate the vascular supply to the facial nerve and, as a result, highlight the etiology of the facial nerve palsy. The angiography and magnetic resonance (MR) imaging of a case of facial palsy following preoperative embolization of a JNA are reviewed. A 13-year-old male developed left-sided facial palsy following preoperative embolization of a left-sided JNA. Evaluation of MR imaging studies and retrospective review of the angiographic data suggested errant embolization of particles into the petrosquamosal branch of the middle meningeal artery (MMA), a branch of the internal maxillary artery (IMA), through collateral vasculature. The petrosquamosal branch of the MMA is the predominant blood supply to the facial nerve in the facial canal. The facial palsy resolved, since complete infarction of the nerve was likely prevented by collateral blood supply from the stylomastoid artery. Facial palsy is a potential complication of embolization of the IMA, a branch of the external carotid artery (ECA), secondary to ischemia of the facial nerve caused by embolization of its vascular supply. Clinicians should be aware of this potential complication and counsel patients accordingly prior to embolization for JNA.
Processing of subliminal facial expressions of emotion: a behavioral and fMRI study.
Prochnow, D; Kossack, H; Brunheim, S; Müller, K; Wittsack, H-J; Markowitsch, H-J; Seitz, R J
2013-01-01
The recognition of emotional facial expressions is an important means of adjusting behavior in social interactions. As facial expressions differ widely in their duration and degree of expressiveness, they often manifest as short and transient expressions below the level of awareness. In this combined behavioral and fMRI study, we aimed to examine whether emotional facial expressions that are not consciously accessible (subliminal) influence empathic judgments, and which brain activations are related to this. We hypothesized that subliminal facial expressions of emotions, masked with neutral expressions of the same faces, induce empathic processing similar to that of consciously accessible (supraliminal) facial expressions. Our behavioral data in 23 healthy subjects showed that subliminal emotional facial expressions of 40 ms duration affect judgments of the subsequent neutral facial expressions. In the fMRI study in 12 healthy subjects, it was found that both supra- and subliminal emotional facial expressions shared a widespread network of brain areas including the fusiform gyrus, the temporo-parietal junction, and the inferior, dorsolateral, and medial frontal cortex. Compared with subliminal facial expressions, supraliminal facial expressions led to greater activation of the left occipital and fusiform face areas. We conclude that masked subliminal emotional information is suited to trigger processing in brain areas that have been implicated in empathy and, thereby, in social encounters.
Facial measurement differences between patients with schizophrenia and non-psychiatric controls.
Compton, Michael T; Brudno, Jennifer; Kryda, Aimee D; Bollini, Annie M; Walker, Elaine F
2007-07-01
Several previous reports suggest that facial measurements in patients with schizophrenia differ from those of non-psychiatric controls. Because the face and brain develop in concert from the same ectodermal tissue, the study of quantitative craniofacial abnormalities may give clues to genetic and/or environmental factors predisposing to schizophrenia. Using a predominantly African American sample, the present research question was two-fold: (1) Do patients differ from controls in terms of a number of specific facial measurements?, and (2) Does cluster analysis based on these facial measurements reveal distinct facial morphologies that significantly discriminate patients from controls? Facial dimensions were measured in 73 patients with schizophrenia and related psychotic disorders (42 males and 31 females) and 69 non-psychiatric controls (35 males and 34 females) using a 25-cm head and neck caliper. Due to differences in facial dimensions by gender, separate independent samples Student's t-tests and logistic regression analyses were employed to discern differences in facial measures between the patient and control groups in women and men. Findings were further explored using cluster analysis. Given an association between age and some facial dimensions, the effect of age was controlled. In unadjusted bivariate tests, female patients differed from female controls on several facial dimensions, though male patients did not differ significantly from male controls for any facial measure. Controlling for age using logistic regression, female patients had a greater mid-facial depth (tragus-subnasale) compared to female controls; male patients had lesser upper facial (trichion-glabella) and lower facial (subnasale-gnathion) heights compared to male controls. Among females, cluster analysis revealed two facial morphologies that significantly discriminated patients from controls, though this finding was not evident when employing further cluster analyses using secondary distance measures. When the sample was restricted to African Americans, results were similar and consistent. These findings indicate that, in a predominantly African American sample, some facial measurements differ between patients with schizophrenia and non-psychiatric controls, and these differences appear to be gender-specific. Further research on gender-specific quantitative craniofacial measurement differences between cases and controls could suggest gender-specific differences in embryologic/fetal neurodevelopmental processes underpinning schizophrenia.
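A minimal sketch of an age-adjusted logistic regression of case/control status on one facial measurement, in the spirit of the analysis described above; the variable names (case, age, midfacial_depth) and all values are hypothetical and only illustrate the form of such a model, not the study's actual data or results.

```python
# Hedged sketch: age-adjusted logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 140
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),               # 1 = patient, 0 = control (synthetic)
    "age": rng.normal(40, 10, n),                # years (synthetic)
    "midfacial_depth": rng.normal(12.0, 0.8, n)  # e.g. tragus-subnasale, cm (synthetic)
})

X = sm.add_constant(df[["age", "midfacial_depth"]])
model = sm.Logit(df["case"], X).fit(disp=False)
print(model.summary())   # exp(coef) of midfacial_depth gives the age-adjusted odds ratio
```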
Mabrouk, Amr; Boughdadi, Nahed Samir; Helal, Hesham A; Zaki, Basim M; Maher, Ashraf
2012-05-01
The face is the central point of the physical features; it transmits expressions and emotions, communicates feelings and allows for individual identity. Facial burns are very common and devastating to the affected patient, resulting in numerous physical, emotional, and psychosocial sequelae. Partial-thickness facial burns are very common, especially among children. This study compares a standard moist open technique with a moist closed technique for the management of partial-thickness burns of the face. Patients with partial-thickness facial burns admitted to the burn unit of Ain Shams University, Cairo, Egypt, between April 2009 and December 2009 were included in this study. They were divided into two groups to receive either open treatment with MEBO® (n=20) or coverage with Aquacel® Ag (n=20). Demographics (age, gender, ethnicity, TBSA, burn areas), length of hospital stay (LOS), rate of infections, time to total healing, frequency of dressing changes, pain, cost benefit and patient discomfort were compared between the two groups. The long-term outcome (incidence of hypertrophic scarring) was assessed over a follow-up period of up to 6 months. There were no significant differences in demographics between the two groups. In the group treated with Aquacel® Ag, the mean time for re-epithelialization was 10.5 days, while it was 12.4 days in the MEBO® group (p<0.05). Frequency of dressing changes, pain and patient discomfort were lower with Aquacel® Ag. Cost did not differ significantly between the two groups. Scar quality improved in the Aquacel® Ag treatment group. Follow-up was performed at 3 and 6 months, and long-term outcomes were recorded in both groups. A moist occlusive dressing (Aquacel® Ag) significantly improves the management and healing rate of partial-thickness facial burns, with a better long-term outcome than a moist open dressing (MEBO®). Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
ERIC Educational Resources Information Center
Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna
2010-01-01
Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…
ERIC Educational Resources Information Center
Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun
2011-01-01
The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…
ERIC Educational Resources Information Center
Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.
2008-01-01
Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…
Plastic surgery and the biometric e-passport: implications for facial recognition.
Ologunde, Rele
2015-04-01
This correspondence comments on the challenges that plastic, reconstructive, and aesthetic surgery poses for the facial recognition algorithms employed by biometric passports. The limitations of facial recognition technology in patients who have undergone facial plastic surgery are also discussed. Finally, the advice of the UK HM Passport Office to people who undergo facial surgery is reported.
Positive facial expressions during retrieval of self-defining memories.
Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad
2017-11-14
In this study, we investigated, for the first time, facial expressions during the retrieval of self-defining memories (i.e., vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their self-defining memories, and autobiographical retrieval was analyzed with facial analysis software. This software (Facereader) synthesizes facial expression information (i.e., from the cheek, lip, and eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of self-defining memories. We also found that participants showed more positive than negative facial expressions during this retrieval. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of self-defining memories, and they provide valuable physiological information about the emotional experience of the past.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
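The following sketch illustrates only the "global" ingredient of such a framework: a PCA shape prior over landmark configurations that pulls a noisy landmark estimate back toward plausible face shapes. It is a simplified stand-in on synthetic data (scikit-learn PCA), not the 3-D statistical facial feature model itself, which additionally learns local texture and geometry and handles occlusions.

```python
# Hedged sketch: PCA shape prior over landmark configurations (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_train, n_landmarks = 200, 15

# Synthetic training shapes: a mean configuration plus low-dimensional variation.
mean_shape = rng.normal(size=(n_landmarks, 3))
basis = rng.normal(size=(4, n_landmarks * 3))
coeffs = rng.normal(size=(n_train, 4))
train = mean_shape.ravel() + coeffs @ basis          # (n_train, n_landmarks * 3)

pca = PCA(n_components=4).fit(train)

# A noisy landmark estimate (e.g. from local texture/geometry matching).
noisy = train[0] + rng.normal(scale=0.3, size=n_landmarks * 3)

# Projecting into the shape subspace and back regularises the configuration
# toward the learned face-shape manifold.
regularised = pca.inverse_transform(pca.transform(noisy.reshape(1, -1)))[0]
print(np.linalg.norm(noisy - train[0]), np.linalg.norm(regularised - train[0]))
```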
Noninvasive Facial Rejuvenation. Part 2: Physician-Directed—Neuromodulators and Fillers
Dickey, Ryan M.; Louis, Matthew R.; Cox, Joshua A.; Mohan, Kriti; Lee, Edward I.; Nigro, Marjory G.
2016-01-01
A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin as well as rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but patients are in control with this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the second in a three-part series describing noninvasive facial rejuvenation. Here the authors discuss neuromodulators and fillers in detail, focusing on indications for use, techniques, and common side effects. PMID:27478422
Meaike, Jesse D.; Agrawal, Nikhil; Chang, Daniel; Lee, Edward I.; Nigro, Marjory G.
2016-01-01
A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin and rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but patients are in control with this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the last in a three-part series describing noninvasive facial rejuvenation. Here the authors review the mechanism, indications, and possible complications of lasers, chemical peels, and other commonly used noninvasive modalities. PMID:27478423
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfred Wickline
Corrective Action Unit 563, Septic Systems, is located in Areas 3 and 12 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 563 is comprised of the four corrective action sites (CASs) below: • 03-04-02, Area 3 Subdock Septic Tank • 03-59-05, Area 3 Subdock Cesspool • 12-59-01, Drilling/Welding Shop Septic Tanks • 12-60-01, Drilling/Welding Shop Outfalls These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document.
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777
Morphology of the Nasal Apparatus in Pygmy (Kogia Breviceps) and Dwarf (K. Sima) Sperm Whales.
Thornton, Steven W; Mclellan, William A; Rommel, Sentiel A; Dillaman, Richard M; Nowacek, Douglas P; Koopman, Heather N; Pabst, D Ann
2015-07-01
Odontocete echolocation clicks are generated by pneumatically driven phonic lips within the nasal passage, and propagated through specialized structures within the forehead. This study investigated the highly derived echolocation structures of the pygmy (Kogia breviceps) and dwarf (K. sima) sperm whales through careful dissections (N = 18 K. breviceps, 6 K. sima) and histological examinations (N = 5 K. breviceps). This study is the first to show that the entire kogiid sound production and transmission pathway is acted upon by complex facial muscles (likely derivations of the m. maxillonasolabialis). Muscles appear capable of tensing and separating the solitary pair of phonic lips, which would control echolocation click frequencies. The phonic lips are enveloped by the "vocal cap," a morphologically complex, connective tissue structure unique to kogiids. Extensive facial muscles appear to control the position of this structure and its spatial relationship to the phonic lips. The vocal cap's numerous air crypts suggest that it may reflect sounds. Muscles encircling the connective tissue case that surrounds the spermaceti organ may change its shape and/or internal pressure. These actions may influence the acoustic energy transmitted from the phonic lips, through this lipid body, to the melon. Facial and rostral muscles act upon the length of the melon, suggesting that the sound "beam" can be focused as it travels through the melon and into the environment. This study suggests that the kogiid echolocation system is highly tunable. Future acoustic studies are required to test these hypotheses and gain further insight into the kogiid echolocation system. © 2015 Wiley Periodicals, Inc.
Guide to Understanding Facial Palsy
... to many different facial muscles. These muscles control facial expression. The coordinated activity of this nerve and these ... involves a weakness of the muscles responsible for facial expression and side-to-side eye movement. Moebius syndrome ...
Rosen, Lisa H.; Underwood, Marion K.
2010-01-01
This study examined the relations between facial attractiveness, aggression, and popularity in adolescence to determine whether facial attractiveness would buffer against the negative effects of aggression on popularity. We collected ratings of facial attractiveness from standardized photographs, and teachers provided information on adolescents’ social aggression, physical aggression, and popularity for 143 seventh graders (70 girls). Regression analyses indicated that facial attractiveness moderated the relations between both types of aggression and popularity. Aggression was associated with a reduction in popularity for adolescents low on facial attractiveness. However, popularity did not decrease as a function of aggression for adolescents high on facial attractiveness. Aggressors with high facial attractiveness may experience fewer negative consequences to their social standing, thus contributing to higher overall rates of aggression in school settings. PMID:20609852
[The application of facial liposuction and fat grafting in the remodeling of facial contour].
Wen, Huicai; Ma, Li; Sui, Ynnpeng; Jian, Xueping
2015-03-01
To investigate the application of facial liposuction and fat grafting in the remodeling of the facial contour. From Nov. 2008 to Mar. 2014, 49 cases received facial liposuction and fat grafting to improve facial contours. Subcutaneous facial liposuction with the tumescent technique and chin fat grafting were performed in all cases; buccal fat pad excision was performed in 7 cases, masseter injection of botulinum toxin type A in 9 cases, temporal fat grafting in 25 cases, and forehead fat grafting in 15 cases. Marked improvement was achieved in all patients, with stable results during the follow-up period of 6-24 months. Complications such as asymmetry, unevenness, and sagging were retreated with acceptable results. The combined application of liposuction and fat grafting can effectively and easily improve the facial contour with low risk.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach that computes optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
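As a rough illustration of this optimization framing, the sketch below greedily selects "marker" vertices from a synthetic mesh sequence so that a least-squares mapping from marker trajectories reconstructs the full sequence; it omits the thin-shell deformation model and the symmetry/multiresolution constraints, so it is a simplified analogue rather than the paper's algorithm.

```python
# Hedged sketch: greedy marker selection on a synthetic facial mesh sequence.
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_verts, n_markers = 60, 100, 5

# Synthetic ground-truth sequence: frames x (vertices * 3), with low-rank motion.
motion = rng.normal(size=(n_frames, 12)) @ rng.normal(size=(12, n_verts * 3))

def recon_error(marker_ids, data):
    """Least-squares reconstruction error of the full sequence from the markers."""
    cols = np.concatenate([[3 * m, 3 * m + 1, 3 * m + 2] for m in marker_ids])
    markers = data[:, cols]                        # marker trajectories only
    weights, *_ = np.linalg.lstsq(markers, data, rcond=None)
    return np.linalg.norm(markers @ weights - data)

chosen = []
for _ in range(n_markers):                         # greedy forward selection
    best = min((v for v in range(n_verts) if v not in chosen),
               key=lambda v: recon_error(chosen + [v], motion))
    chosen.append(best)

print("selected marker vertices:", chosen)
```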
Soft-tissue facial characteristics of attractive Chinese men compared to normal men.
Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing
2015-01-01
To compare the facial characteristics of attractive Chinese men with those of reference men. The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft tissue facial angles, distances, areas, and volumes were then computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, a reduced mandible, and a rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always consider the characteristics of individual faces.
Xing, Yian; Chen, Lianhua; Li, Shitong
2013-11-01
Muscles innervated by the facial nerve show different sensitivities to muscle relaxants than muscles innervated by somatic nerves, especially in the presence of facial nerve injury. We compared the evoked electromyography (EEMG) responses of the orbicularis oris and gastrocnemius muscles with and without a non-depolarizing muscle relaxant in a rabbit model of graded facial nerve injury. Differences in EEMG response and inhibition by rocuronium were measured in the orbicularis oris and gastrocnemius muscles 7 to 42 d after different levels of facial nerve crush injury in adult rabbits. Baseline EEMG of the orbicularis oris was significantly smaller than that of the gastrocnemius. The gastrocnemius was more sensitive to rocuronium than the facial muscles (P < 0.05). Baseline EEMG and the EEMG amplitude of the orbicularis oris in the presence of rocuronium were negatively correlated with the magnitude of facial nerve injury, but the sensitivity to rocuronium was not. No significant difference was found in the onset time and recovery time of rocuronium among the gastrocnemius and normal or damaged facial muscles. Muscles innervated by somatic nerves are more sensitive to rocuronium than those innervated by the facial nerve, but while facial nerve injury reduced EEMG responses, the sensitivity to rocuronium was not altered. Partial neuromuscular blockade may be a suitable technique for conducting anesthesia and surgery safely when EEMG monitoring is needed to preserve and protect the facial nerve. Additional caution should be used if there is a risk of preexisting facial nerve injury. Copyright © 2013 Elsevier Inc. All rights reserved.
Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan
2018-01-01
It is an important question how human beings achieve efficient recognition of others’ facial expressions in cognitive neuroscience, and it has been identified that specific cortical regions show preferential activation to facial expressions in previous studies. However, the potential contributions of the connectivity patterns in the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from the functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, which includes general interactions between distributed brain regions, and that contributes to the human facial expression recognition. PMID:29615882
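A minimal sketch of the decoding step in an fcMVPA-style analysis: vectorize the upper triangle of each connectivity matrix and classify expression labels with a cross-validated linear SVM. The ROI count, connectivity matrices, and labels below are synthetic placeholders, and the pipeline is illustrative rather than a reproduction of the study's analysis.

```python
# Hedged sketch: expression decoding from synthetic functional-connectivity patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_samples, n_rois, n_classes = 120, 30, 6          # six basic emotions

labels = rng.integers(0, n_classes, n_samples)

# Simulated symmetric ROI-by-ROI connectivity matrices.
fc = rng.normal(size=(n_samples, n_rois, n_rois))
fc = (fc + fc.transpose(0, 2, 1)) / 2

# Vectorise the upper triangle (the usual FC feature vector) and inject a small
# class-dependent signal into a few edges so that decoding is possible.
iu = np.triu_indices(n_rois, k=1)
X = fc[:, iu[0], iu[1]]
X[:, :n_classes] += 1.5 * np.eye(n_classes)[labels]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```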
Facial biometrics of peri-oral changes in Crohn's disease.
Zou, L; Adegun, O K; Willis, A; Fortune, Farida
2014-05-01
Crohn's disease is a chronic relapsing and remitting inflammatory condition which can affect any part of the gastrointestinal tract. In the oro-facial region, patients can present with peri-oral swellings, which result in severe facial disfigurement. To date, assessing the degree of facial change and evaluating treatment outcomes rely on clinical observation and semi-quantitative methods. In this paper, we describe the development of a robust and reproducible measurement strategy using 3-D facial biometrics to objectively quantify the extent and progression of oro-facial Crohn's disease. Using facial laser scanning, 32 serial images from 13 Crohn's patients attending the Oral Medicine clinic were acquired during relapse, remission, and post-treatment phases. Utilising theories of coordinate metrology, the facial images were subjected to registration, identification of regions of interest, and reproducible repositioning prior to obtaining volume measurements. To quantify the changes in tissue volume, scan images from consecutive appointments were compared to the baseline (first) scan image. A reproducibility test was performed to ascertain the degree of uncertainty in the volume measurements. 3-D facial biometric imaging is a reliable method to identify and quantify peri-oral swelling in Crohn's patients. Comparison of facial scan images at different phases of the disease precisely revealed profile and volume changes. The volume measurements were highly reproducible, as judged from the 1% standard deviation. 3-D facial biometric measurements in Crohn's patients with oro-facial involvement offer a quick, robust, economical, and objective approach for guided therapeutic intervention and routine assessment of treatment efficacy in the clinic.
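Once serial scans have been registered to a common frame, a peri-oral volume change can be approximated on a depth-map grid as the integral of the height difference over a region of interest. The sketch below does this on synthetic depth maps; the grid spacing, region of interest, and "swelling" are hypothetical, and the snippet is not the coordinate-metrology pipeline used in the study.

```python
# Hedged sketch: volume change between two registered, synthetic depth maps.
import numpy as np

dx = dy = 0.5                                      # mm per grid cell (assumed)
x, y = np.meshgrid(np.arange(-40, 40, dx), np.arange(-40, 40, dy))

baseline = np.zeros_like(x)                        # baseline depth map (mm)
swelling = 3.0 * np.exp(-((x - 15) ** 2 + (y + 10) ** 2) / 200.0)
followup = baseline + swelling                     # relapse scan with a local swelling

roi = (x > 0) & (y < 5)                            # crude peri-oral region of interest
delta = followup - baseline                        # per-cell height change (mm)
volume_mm3 = np.sum(delta[roi]) * dx * dy          # integrate over the ROI
print(f"estimated swelling volume: {volume_mm3 / 1000:.2f} mL")
```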
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression, given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
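The sketch below implements the core Self-Organising Map update (best-matching unit plus Gaussian neighbourhood) on toy two-dimensional inputs standing in for identity and expression factors. It is a single flat SOM intended only to illustrate the learning rule; VisNet's four-layer hierarchy, its associative feedforward learning, and the cartoon-face stimuli are not reproduced here.

```python
# Hedged sketch: a single Self-Organising Map trained on toy two-factor inputs.
import numpy as np

rng = np.random.default_rng(5)
grid, dim = 10, 2                                  # 10 x 10 map; (identity, expression)
W = rng.random((grid, grid, dim))                  # node weight vectors
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

def train_step(x, W, lr, sigma):
    # Best-matching unit: the node whose weights are closest to the input.
    dists = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # A Gaussian neighbourhood around the BMU pulls nearby nodes toward x.
    d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
    return W + lr * h * (x - W)

n_steps = 2000
for t in range(n_steps):
    x = rng.random(dim)                            # toy identity/expression coordinates
    frac = t / n_steps
    W = train_step(x, W, lr=0.5 * (1 - frac), sigma=3.0 * (1 - frac) + 0.5)

print("trained map shape:", W.shape)               # a topology-preserving 2-D map
```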
Operant conditioning of facial displays of pain.
Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan
2011-06-01
The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p < .001) but not in the yoked control group (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R² = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) that changes in facial pain displays can affect self-report ratings.
Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine
Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang
2014-01-01
Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective nature, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only study the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, needed for quantitative analysis, has not been reported yet. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion computed from the whole face of patients. The features are built from four chromaticity bases, split by luminance distribution in CIELAB color space. The chromaticity bases are constructed from the dominant facial colors using two-level clustering; the optimal luminance split is determined through experimental comparisons. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, the features are further improved by a weighted fusion of five local facial regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
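A simplified sketch of the qualitative (classification) part of such a pipeline: convert facial pixels to CIELAB, build chromaticity "bases" by clustering a*/b* values with k-means, represent each face by the fraction of its pixels assigned to each base, and classify complexion with an SVM. The pixel data, labels, and parameter choices below are synthetic assumptions; the luminance split, the weighted fusion of local regions, and the gloss ranking function of the proposed method are not implemented.

```python
# Hedged sketch: chromaticity-base features and SVM complexion classification
# on synthetic skin-coloured pixel data.
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_faces, n_pixels, n_bases = 60, 500, 4

# Synthetic skin-coloured RGB pixels per face (values in [0, 1]), with a small
# class-dependent red shift so that classification is possible.
base_rgb = np.array([0.85, 0.65, 0.55])
pixels = base_rgb + 0.1 * rng.normal(size=(n_faces, n_pixels, 3))
labels = rng.integers(0, 3, n_faces)               # e.g. pale / red / sallow (synthetic)
pixels[:, :, 0] += 0.05 * labels[:, None]
pixels = np.clip(pixels, 0, 1)

lab = rgb2lab(pixels)                              # (n_faces, n_pixels, 3) in L*a*b*

# Chromaticity bases: cluster the pooled a*/b* values of all faces.
ab = lab[..., 1:].reshape(-1, 2)
km = KMeans(n_clusters=n_bases, n_init=10, random_state=0).fit(ab)

# Per-face feature: fraction of its pixels assigned to each chromaticity base.
assign = km.predict(ab).reshape(n_faces, n_pixels)
X = np.stack([(assign == c).mean(axis=1) for c in range(n_bases)], axis=1)

scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print(f"complexion classification accuracy: {scores.mean():.2f}")
```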