A large-scale analysis of sex differences in facial expressions
Kodra, Evan; el Kaliouby, Rana; LaFrance, Marianne
2017-01-01
There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. This large-scale study examines whether women are consistently more expressive than men or whether the effects depend on the emotion expressed. Previous studies of gender differences in expressivity have largely been restricted to data collected in lab settings or to data that required labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants’ own webcams, and facial activity was coded using new automated facial coding technology. We find that women are not universally more expressive across all facial actions, nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. In general, women express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and depends on the discrete emotional state. PMID:28422963
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
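The Gabor wavelet representation that performed best in the comparison above can be illustrated with a minimal sketch: a real-valued Gabor kernel is a Gaussian envelope modulated by an oriented cosine carrier, and a bank of such filters at several orientations and scales yields features for classifying facial actions. The size, wavelength, and bandwidth parameters below are illustrative choices, not the ones used in the paper.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2D Gabor filter: Gaussian envelope times an
    oriented cosine carrier. `size` should be odd."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation frame.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def filter_response(patch, kernel):
    """Inner product of an image patch with the kernel: one sample of
    the filter-bank output used as a classification feature."""
    return sum(p * k for prow, krow in zip(patch, kernel)
               for p, k in zip(prow, krow))

k = gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0)
resp = filter_response([[1.0] * 9] * 9, k)   # response to a flat patch
```

In a full system, kernels at multiple orientations and wavelengths would be convolved over the face image and the pooled responses fed to a classifier; only the single-kernel building block is shown here.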
A Neural Basis of Facial Action Recognition in Humans
Srinivasan, Ramprakash; Golomb, Julie D.
2016-01-01
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. 
Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688
Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi
2017-01-01
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979
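The stepwise regression models described above can be sketched as a greedy forward selection over AU predictors: at each step, add the facial action whose inclusion most improves R² for a given emotion rating, stopping when no candidate clears a threshold. The AU names, synthetic ratings, and stopping threshold below are invented for illustration; the study's actual selection criteria may differ.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(X_cols, y):
    """R^2 of an OLS fit of y on an intercept plus the given columns."""
    n = len(y)
    cols = [[1.0] * n] + X_cols
    k = len(cols)
    XtX = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[t] - yhat[t]) ** 2 for t in range(n))
    ss_tot = sum((y[t] - ybar) ** 2 for t in range(n))
    return 1.0 - ss_res / ss_tot

def forward_stepwise(predictors, y, min_gain=0.01):
    """Greedily add the AU predictor that most improves R^2."""
    selected, best_r2 = [], 0.0
    while True:
        candidates = [p for p in predictors if p not in selected]
        if not candidates:
            break
        gains = {p: r_squared([predictors[q] for q in selected + [p]], y) - best_r2
                 for p in candidates}
        p, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < min_gain:
            break
        selected.append(p)
        best_r2 += gain
    return selected, best_r2

random.seed(0)
n = 40
aus = {au: [random.random() for _ in range(n)] for au in ("AU1", "AU9", "AU12")}
# Hypothetical "surprise" ratings driven by AU1 (inner brow raiser) and AU9:
surprise = [2.0 * aus["AU1"][t] + 1.0 * aus["AU9"][t] + random.gauss(0.0, 0.05)
            for t in range(n)]
picked, r2 = forward_stepwise(aus, surprise)
```

On this synthetic data the procedure selects AU1 and AU9 and leaves out the irrelevant AU12, mirroring how a stepwise model isolates the facial components that track a given self-reported emotion.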
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1976-01-01
The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system resolves fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, 6 upper face AUs, and 10 lower face AUs) are recognized whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210
Facial Expression Generation from Speaker's Emotional States in Daily Conversation
NASA Astrophysics Data System (ADS)
Mori, Hiroki; Ohshima, Koh
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions, and the latter are coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
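The mapping from dimensional emotional states to FACS-coded expressions described above can be sketched as a small feedforward network. The weights below are random placeholders (a real system would train them on the rated parallel data, e.g. by backpropagation), and the three input dimensions and four output AUs are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def mlp_forward(emotion_vec, W1, b1, W2, b2):
    """One tanh hidden layer; sigmoid outputs in [0, 1] interpreted
    as intensities of a few FACS action units."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, emotion_vec)) + b)
              for row, b in zip(W1, b1)]
    return [1.0 / (1.0 + math.exp(-(sum(w * h for w, h in zip(row, hidden)) + b)))
            for row, b in zip(W2, b2)]

random.seed(1)
n_in, n_hidden, n_out = 3, 5, 4   # e.g. pleasantness/arousal/dominance -> 4 AUs
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

aus = mlp_forward([0.8, 0.3, -0.2], W1, b1, W2, b2)  # one emotional state
```

Each output value would drive the intensity of one action unit in the generated facial expression; the sigmoid keeps intensities in a bounded range.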
The identification of unfolding facial expressions.
Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo
2012-01-01
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.
Joint Patch and Multi-label Learning for Facial Action Unit Detection
Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang
2016-01-01
The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone or in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets (CK+, GFT, and BP4D), JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243
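The group-sparsity mechanism that lets JPML select a sparse subset of facial patches can be illustrated with the proximal operator of the group-lasso penalty, which shrinks each patch's weight group and zeroes out entire groups whose norm is small. This is a sketch of the general mechanism only, not JPML's actual optimization procedure; the weights and groups below are invented.

```python
import math

def group_prox(weights, groups, lam):
    """Proximal operator of the group-lasso penalty: scale each weight
    group by max(0, 1 - lam/||g||). Groups whose norm falls below lam
    are zeroed out entirely (the corresponding patch is de-selected)."""
    out = list(weights)
    for idx in groups:
        norm = math.sqrt(sum(weights[i] ** 2 for i in idx))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in idx:
            out[i] = weights[i] * scale
    return out

w = [3.0, 4.0, 0.1, 0.2]      # two hypothetical "patches" of two weights each
groups = [(0, 1), (2, 3)]
shrunk = group_prox(w, groups, lam=1.0)
```

The first group (norm 5) survives with its weights uniformly shrunk, while the second (norm ≈ 0.22) is zeroed: this is how group sparsity discards whole facial patches rather than individual features.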
A study of patient facial expressivity in relation to orthodontic/surgical treatment.
Nafziger, Y J
1994-09-01
A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires, besides the routine static study (records, study models, photographs, and cephalometric tracings), the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face was studied with the aid of the Facial Action Coding System (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from those recordings, the author modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique is based on defining individual facial expressions and then codifying them through minimal, anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and of six control subjects without dentofacial deformation, were studied. In total, 6,278 facial expressions were registered, from which 18,844 action units were further defined. A classification of the facial expressions made by subject groups and repeated in quantified time frames allowed the establishment of "rules" or "norms" relating to expression, enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery.
Changes noted tended toward a functioning that is identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and the chin. The results of our study are supported by the clinical observations and suggest that the FACS technique should be able to provide a coding for the study of facial expression.
Differences between Children and Adults in the Recognition of Enjoyment Smiles
ERIC Educational Resources Information Center
Del Giudice, Marco; Colle, Livia
2007-01-01
The authors investigated the differences between 8-year-olds (n = 80) and adults (n = 80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple…
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
Affect in Human-Robot Interaction
2014-01-01
is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978)... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of... robot's behavior through decision-making, learning, or action selection; a number of researchers used the fuzzy logic approach to emotion generation
A dynamic appearance descriptor approach to facial actions temporal modeling.
Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja
2014-02-01
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
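Enforcing temporal consistency over frame-wise segment scores, as the Markov models above do, can be sketched with a small Viterbi decoder that only permits the neutral → onset → apex → offset → neutral cycle. The scores, transition rules, and the assumption that an episode starts in the neutral state are all illustrative; they are not the LPQ-TOP system's actual model.

```python
STATES = ["neutral", "onset", "apex", "offset"]
ALLOWED = {"neutral": {"neutral", "onset"},
           "onset":   {"onset", "apex"},
           "apex":    {"apex", "offset"},
           "offset":  {"offset", "neutral"}}

def viterbi_smooth(frame_scores):
    """frame_scores: per-frame dict state -> log-score from a frame-wise
    classifier. Returns the best label sequence that respects the
    neutral -> onset -> apex -> offset cycle (episode starts neutral)."""
    NEG = float("-inf")
    best = {s: (frame_scores[0][s] if s == "neutral" else NEG) for s in STATES}
    back = []
    for scores in frame_scores[1:]:
        new, ptr = {}, {}
        for s in STATES:
            preds = [p for p in STATES if s in ALLOWED[p] and best[p] > NEG]
            if not preds:
                new[s], ptr[s] = NEG, None
                continue
            p = max(preds, key=lambda q: best[q])
            new[s] = best[p] + scores[s]
            ptr[s] = p
        best = new
        back.append(ptr)
    # Trace back from the best final state.
    s = max(STATES, key=lambda q: best[q])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))

# A noisy episode: the classifier mislabels frame 3 as "neutral" mid-AU.
NOISY = [
    {"neutral": 0.0, "onset": -2.0, "apex": -2.0, "offset": -2.0},
    {"neutral": -2.0, "onset": 0.0, "apex": -2.0, "offset": -2.0},
    {"neutral": -2.0, "onset": 0.0, "apex": -2.0, "offset": -2.0},
    {"neutral": 0.0, "onset": -2.0, "apex": -2.0, "offset": -2.0},  # glitch
    {"neutral": -2.0, "onset": -2.0, "apex": 0.0, "offset": -2.0},
    {"neutral": -2.0, "onset": -2.0, "apex": -2.0, "offset": 0.0},
    {"neutral": 0.0, "onset": -2.0, "apex": -2.0, "offset": -2.0},
]
path = viterbi_smooth(NOISY)
```

The decoder repairs the mid-episode glitch because a neutral frame cannot legally follow an onset frame, which is the essence of combining frame-wise classification with Markov temporal constraints.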
Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord
2015-01-01
We describe the development (Study 1) and validation (Study 2) of a categorical system for the attribution of facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al. 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self, and object). In Study 2 we rate the facial expressions (n=2320) from the OPD interviews (10 minutes each) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP, Horowitz et al. 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE-Coding System. The attribution of interactive, self-related, and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.
Automated and objective action coding of facial expressions in patients with acute facial palsy.
Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando
2015-05-01
The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 % and there was no significant difference from patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
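The bilateral AU measures reported above can be sketched as simple summaries over left/right AU intensity profiles: a count of activated AUs per side and a normalized asymmetry percentage. The study does not give its exact formulas, so the normalization, threshold, AU names, and values below are illustrative assumptions only.

```python
def asymmetry_score(left_aus, right_aus):
    """Percent asymmetry between bilateral AU intensity profiles:
    total absolute left/right difference relative to the mean of the
    total left and right activation (illustrative formula)."""
    assert set(left_aus) == set(right_aus)
    diffs = [abs(left_aus[au] - right_aus[au]) for au in left_aus]
    total = sum(left_aus[au] + right_aus[au] for au in left_aus)
    if total == 0:
        return 0.0
    return 100.0 * 2.0 * sum(diffs) / total

def n_activated(aus, threshold=0.1):
    """Count AUs whose intensity exceeds a detection threshold."""
    return sum(1 for v in aus.values() if v > threshold)

left = {"AU12": 0.8, "AU6": 0.4, "AU4": 0.0}    # hypothetical healthy side
right = {"AU12": 0.4, "AU6": 0.4, "AU4": 0.0}   # hypothetical affected side
score = asymmetry_score(left, right)
```

Tracking such per-side counts and the asymmetry score across follow-up visits is the kind of objective, regional measurement the abstract argues should replace purely observer-based grading.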
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-standing debate concerns whether the sensorimotor coding carried out during the observation of transitive actions reflects the low-level movement implementation details or the movement goals. Phonemes and emotional facial expressions, by contrast, are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model for demonstrating that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or an upper face posture manipulation during the execution of a four-alternative labelling task of pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in their specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model for testing the role of the sensorimotor system in the perception of actions presented visually.
The faces of pain: a cluster analysis of individual differences in facial activity patterns of pain.
Kunz, M; Lautenbacher, S
2014-07-01
There is general agreement that facial activity during pain conveys pain-specific information but is nevertheless characterized by substantial inter-individual differences. With the present study we aim to investigate whether these differences represent idiosyncratic variations or whether they can be clustered into distinct facial activity patterns. Facial actions during heat pain were assessed in two samples of pain-free individuals (n = 128; n = 112) and were later analysed using the Facial Action Coding System. Hierarchical cluster analyses were used to look for combinations of single facial actions in episodes of pain. The stability/replicability of facial activity patterns was determined across samples as well as across different basic social situations. Cluster analyses revealed four distinct activity patterns during pain, which stably occurred across samples and situations: (I) narrowed eyes with furrowed brows and wrinkled nose; (II) opened mouth with narrowed eyes; (III) raised eyebrows; and (IV) furrowed brows with narrowed eyes. In addition, a considerable number of participants were facially completely unresponsive during pain induction (stoic cluster). These activity patterns seem to be reaction stereotypies in the majority of individuals (in nearly two-thirds), whereas a minority displayed varying clusters across situations. These findings suggest that there is no uniform set of facial actions but instead there are at least four different facial activity patterns occurring during pain that are composed of different configurations of facial actions. Raising awareness about these different 'faces of pain' might hold the potential of improving the detection and, thereby, the communication of pain. © 2013 European Pain Federation - EFIC®
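The hierarchical cluster analysis described above can be sketched with a small agglomerative (single-linkage) procedure over binary AU-occurrence profiles, using a Jaccard-style distance. The AU columns and participant profiles below are invented to mirror the four reported patterns plus the stoic cluster; they are not the study's data.

```python
def jaccard_distance(a, b):
    """Distance between two binary AU-occurrence vectors."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 if union == 0 else 1.0 - inter / union

def agglomerative(items, n_clusters, dist=jaccard_distance):
    """Single-linkage agglomerative clustering down to n_clusters:
    repeatedly merge the two closest clusters."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Each row: one participant's binary AU profile during pain. Columns
# (hypothetical): brow lower, nose wrinkle, eye narrow, mouth open, brow raise.
profiles = [
    [1, 1, 1, 0, 0],   # narrowed eyes + furrowed brows + wrinkled nose
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],   # opened mouth + narrowed eyes
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],   # raised eyebrows
    [0, 0, 0, 0, 0],   # stoic: no facial response
]
clusters = agglomerative(profiles, n_clusters=4)
```

On real data the number of clusters would be chosen from the dendrogram rather than fixed in advance; the fixed `n_clusters=4` here just exposes the four toy patterns.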
Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.
Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves
2018-06-01
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.
Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda
2017-01-01
People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.
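The correlation matrices of muscle activations described above can be sketched as pairwise Pearson correlations over per-frame AU intensity series, where coactivation of facial regions shows up as high off-diagonal values. The AU names and frame values below are illustrative, not the study's data.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

def correlation_matrix(series):
    """series: dict of AU name -> per-frame intensities.
    Returns a nested dict of pairwise correlations."""
    names = list(series)
    return {a: {b: pearson(series[a], series[b]) for b in names} for a in names}

frames = {
    "AU6_cheek_raiser": [0.0, 0.5, 1.0, 0.5, 0.0],
    "AU12_lip_corner":  [0.1, 0.6, 1.1, 0.6, 0.1],   # moves with AU6
    "AU4_brow_lowerer": [1.0, 0.5, 0.0, 0.5, 1.0],   # moves against AU6
}
corr = correlation_matrix(frames)
```

In a coordinated smile, AU6 and AU12 track each other (correlation near 1); as expressivity deficits increase, such off-diagonal coactivation values would be expected to shrink toward zero.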
Spontaneous and posed facial expression in Parkinson's disease.
Smith, M C; Smith, M K; Ellgring, H
1996-09-01
Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.
Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects
Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan
2014-01-01
Generating extreme appearances such as sweating while scared, tears while happy (crying), and blushing (in anger and happiness) is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tears initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human facial expression is enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the facial animation and game industries as well as to computer graphics. PMID:25136663
Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang
2016-10-01
The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were differentiated accurately and reliably, significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.
What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.
Fitousi, Daniel
2017-07-01
A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures, dubbed herein "face files", consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs: repeating a combination of facial features, or altering all of them, led to faster responses than repeating or altering only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.
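The partial-repetition-cost logic can be made concrete with a toy computation; the trial structure and reaction-time values below are invented for illustration.

```python
# Toy partial-repetition-cost computation (trial structure and RT values
# invented): binding is inferred when complete repetitions/alternations
# are faster than trials repeating exactly one feature.
def mean(xs):
    return sum(xs) / len(xs)

def partial_repetition_cost(trials):
    """trials: (identity_repeated, emotion_repeated, rt_ms) tuples."""
    complete = [rt for a, b, rt in trials if a == b]   # both repeated or both changed
    partial = [rt for a, b, rt in trials if a != b]    # exactly one repeated
    return mean(partial) - mean(complete)              # positive cost => binding

trials = [
    (True,  True,  480), (False, False, 495),  # complete repetition / alternation
    (True,  False, 560), (False, True,  548),  # partial repetitions
]
cost = partial_repetition_cost(trials)  # 554.0 - 487.5 = 66.5 ms
```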
Automated detection of pain from facial expressions: a rule-based approach using AAM
NASA Astrophysics Data System (ADS)
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
2012-02-01
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
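A hedged sketch of what such a rule-based detector can look like; the landmark geometry and thresholds below are hypothetical, not the paper's, which derives its rules from the trained person-specific model.

```python
# Hypothetical rule-based AU checks over AAM shape landmarks (distances
# and thresholds invented for illustration).
def detect_au4(brow_y, eye_y, neutral_gap, ratio=0.85):
    """AU4 (brow lowerer): brow-to-eye distance below 85% of neutral."""
    return (brow_y - eye_y) < ratio * neutral_gap

def detect_au43(eye_aperture, neutral_aperture, ratio=0.30):
    """AU43 (eye closure): aperture below 30% of its neutral value."""
    return eye_aperture < ratio * neutral_aperture

lowered = detect_au4(brow_y=40.0, eye_y=32.0, neutral_gap=10.0)  # gap 8.0 < 8.5
closed = detect_au43(eye_aperture=2.0, neutral_aperture=10.0)    # 2.0 < 3.0
```

Rules of this kind suit data where the target actions are rare and subtle, since they need no large labelled training set per class.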
A comparison of facial expression properties in five hylobatid species.
Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget
2014-07-01
Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared their repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use of each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a Phylogenetic General Least Square (PGLS) regression analysis to correlate those properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. © 2014 Wiley Periodicals, Inc.
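The three properties can be computed from a table of expression counts. A minimal sketch follows; Shannon entropy is used here as one common operationalisation of rate-weighted diversity, and the counts are invented. The paper's exact index may differ.

```python
import math

# Sketch of the three expression properties: rate, repertoire size, and
# diversity (Shannon entropy as a stand-in diversity index).
def expression_properties(counts, observation_minutes):
    """counts: expression type -> number of occurrences."""
    total = sum(counts.values())
    rate = total / observation_minutes                     # expressions per minute
    repertoire = sum(1 for c in counts.values() if c > 0)  # distinct types used
    diversity = -sum((c / total) * math.log(c / total)
                     for c in counts.values() if c > 0)
    return rate, repertoire, diversity

# Invented counts for one dyad over a 5-minute observation:
rate, repertoire, diversity = expression_properties(
    {"open_mouth": 6, "bared_teeth": 3, "lip_pucker": 1}, observation_minutes=5)
```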
ERIC Educational Resources Information Center
Dondi, Marco; Messinger, Daniel; Colle, Marta; Tabasso, Alessia; Simion, Francesca; Barba, Beatrice Dalla; Fogel, Alan
2007-01-01
To better understand the form and recognizability of neonatal smiling, 32 newborns (14 girls; M = 25.6 hr) were videorecorded in the behavioral states of alertness, drowsiness, active sleep, and quiet sleep. Baby Facial Action Coding System coding of both lip corner raising (simple or non-Duchenne) and lip corner raising with cheek raising…
ERIC Educational Resources Information Center
Fogel, Alan; Hsu, Hui-Chin; Shapiro, Alyson F.; Nelson-Goens, G. Christina; Secrist, Cory
2006-01-01
Different types of smiling varying in amplitude of lip corner retraction were investigated during 2 mother-infant games--peekaboo and tickle--at 6 and 12 months and during normally occurring and perturbed games. Using Facial Action Coding System (FACS), infant smiles were coded as simple (lip corner retraction only), Duchenne (simple plus cheek…
Facial correlates of emotional behaviour in the domestic cat (Felis catus).
Bennett, Valerie; Gourkow, Nadine; Mills, Daniel S
2017-08-01
Leyhausen's (1979) work on cat behaviour and facial expressions associated with offensive and defensive behaviour is widely embraced as the standard for interpretation of agonistic behaviour in this species. However, it is a largely anecdotal description that can be easily misunderstood. Recently a facial action coding system has been developed for cats (CatFACS), similar to that used for objectively coding human facial expressions. This study reports on the use of this system to describe the relationship between behaviour and facial expressions of cats in confinement contexts without and with human interaction, in order to generate hypotheses about the relationship between these expressions and underlying emotional state. Video recordings taken of 29 cats resident in a Canadian animal shelter were analysed using 1-0 sampling of 275 4-s video clips. Observations under the two conditions were analysed descriptively using hierarchical cluster analysis for binomial data and indicated that in both situations, about half of the data clustered into three groups. An argument is presented that these largely reflect states based on varying degrees of relaxed engagement, fear and frustration. Facial actions associated with fear included blinking and half-blinking and a left head and gaze bias at lower intensities. Facial actions consistently associated with frustration included hissing, nose-licking, dropping of the jaw, the raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue. Relaxed engagement appeared to be associated with a right gaze and head turn bias. The results also indicate potential qualitative changes associated with differences in intensity in emotional expression following human intervention. The results were also compared to the classic description of "offensive and defensive moods" in cats (Leyhausen, 1979) and previous work by Gourkow et al. 
(2014a) on behavioural styles in cats in order to assess whether these observations had replicable features noted by others. This revealed evidence of convergent validity between the methods. However, the use of CatFACS revealed elements relating to vocalisation and response lateralisation not previously reported in this literature. Copyright © 2017 Elsevier B.V. All rights reserved.
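For 1-0 sampled behaviour records like these, a Jaccard distance is a common ingredient of hierarchical clustering for binomial data; the study's exact distance and linkage are not stated in the abstract, so this is a hedged illustration.

```python
# Jaccard distance between 1-0 behaviour records, a common choice when
# clustering binomial data (the study's exact distance is not specified).
def jaccard_distance(a, b):
    """a, b: equal-length 0/1 vectors of behaviour presence per clip."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 - (both / either if either else 1.0)

# Two clips sharing one of three observed behaviours are 2/3 apart:
d = jaccard_distance([1, 1, 0, 0], [1, 0, 1, 0])  # both=1, either=3
```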
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
NASA Astrophysics Data System (ADS)
Amijoyo Mochtar, Andi
2018-02-01
Applications of robotics have become important to human life in recent years. Many robot designs have been improved and enriched by advances in technology. Among them are humanoid robots whose facial expressions approach natural human facial expressions. The purpose of this research is to perform computations on facial expressions and to conduct tensile-strength tests of silicone rubber as artificial skin. Facial expressions were calculated by determining the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expression is made to resemble human facial expression by modeling the facial muscle structure according to human face anatomy. For developing facial expression robots, the Facial Action Coding System (FACS) is adopted so that the robot follows human expressions. The tensile test is conducted to check the proportional force of the artificial skin, with a view to its application in future robot facial expression work. Combining the calculated and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
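The concordance statistic (C-index) reported above has a simple pairwise definition: the probability that a randomly chosen admitted patient receives a higher model score than a randomly chosen non-admitted patient. A sketch with invented scores and outcomes:

```python
# Pairwise computation of the concordance statistic (C-index); patient
# scores and outcomes below are invented for illustration.
def c_index(scores, admitted):
    """Fraction of (admitted, not-admitted) pairs ranked correctly."""
    pairs = concordant = ties = 0
    for i, si in enumerate(scores):
        for j, sj in enumerate(scores):
            if admitted[i] and not admitted[j]:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

scores   = [0.9, 0.6, 0.4, 0.3, 0.7]           # hypothetical model outputs
admitted = [True, True, False, False, False]   # ICU admission outcomes
c = c_index(scores, admitted)                  # 5 of 6 pairs concordant
```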
The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity
Korb, Sebastian; With, Stéphane; Niedenthal, Paula; Kaiser, Susanne; Grandjean, Didier
2014-01-01
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns. PMID:24918939
Stability of Facial Affective Expressions in Schizophrenia
Fatouros-Bergman, H.; Spang, J.; Merten, J.; Preisler, G.; Werbart, A.
2012-01-01
Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. 64 selected sequences where the patients were speaking about psychotic experiences were scored for facial affective behaviour with Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affects displayed across the weekly interview occasions. Whereas previous findings found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt and implies that this seems to be a fairly stable feature. PMID:22966449
Sayers, W Michael; Sayette, Michael A
2013-09-01
Research on emotion suppression has shown a rebound effect, in which expression of the targeted emotion increases following a suppression attempt. In prior investigations, participants have been explicitly instructed to suppress their responses, which has drawn the act of suppression into metaconsciousness. Yet emerging research emphasizes the importance of nonconscious approaches to emotion regulation. This study is the first in which a craving rebound effect was evaluated without simultaneously raising awareness about suppression. We aimed to link spontaneously occurring attempts to suppress cigarette craving to increased smoking motivation assessed immediately thereafter. Smokers (n = 66) received a robust cued smoking-craving manipulation while their facial responses were videotaped and coded using the Facial Action Coding System. Following smoking-cue exposure, participants completed a behavioral choice task previously found to index smoking motivation. Participants evincing suppression-related facial expressions during cue exposure subsequently valued smoking more than did those not displaying these expressions, which suggests that internally generated suppression can exert powerful rebound effects.
Coding and quantification of a facial expression for pain in lambs.
Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J
2016-11-01
Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period.
A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain. The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
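Kendall's coefficient of concordance W, used above to quantify observer agreement (W = 0.60 and 0.66), can be computed directly from rank matrices. A sketch with toy ratings; this version omits the correction for tied ranks.

```python
# Kendall's coefficient of concordance W over observer rank matrices
# (toy ranks; no tie correction in this sketch).
def kendalls_w(ratings):
    """ratings: m observers x n items, each row holding ranks 1..n."""
    m, n = len(ratings), len(ratings[0])
    totals = [sum(observer[j] for observer in ratings) for j in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)   # spread of rank sums
    return 12.0 * s / (m * m * (n ** 3 - n))         # 0 = no agreement, 1 = perfect

perfect = kendalls_w([[1, 2, 3], [1, 2, 3], [1, 2, 3]])   # identical rankings
split = kendalls_w([[1, 2, 3], [3, 2, 1]])                # exactly reversed
```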
Psychometric challenges and proposed solutions when scoring facial emotion expression codes.
Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver
2014-12-01
Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are recommended methods of data treatment. When scoring facial emotion expression ability, maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
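The recommended scoring procedure (smoothing, baseline control, maximum score) can be sketched as follows; a centred moving average stands in for loess here, and the frame-level evidence values are invented.

```python
# Sketch of the recommended scoring pipeline; a centred moving average
# stands in for loess, and the evidence series is invented.
def smooth(series, window=3):
    """Centred moving average (loess stand-in for this sketch)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def max_expression_score(series, baseline_frames=3):
    """Smooth, subtract the pre-stimulus baseline, take the maximum."""
    smoothed = smooth(series)
    baseline = sum(smoothed[:baseline_frames]) / baseline_frames
    return max(v - baseline for v in smoothed[baseline_frames:])

# Frame-level emotion-evidence values for one trial (hypothetical):
evidence = [0.1, 0.1, 0.1, 0.2, 0.6, 0.9, 0.5, 0.2]
score = max_expression_score(evidence)
```

Subtracting the baseline segment implements the paper's advice to control for each face's resting expression before taking the maximum.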
Universals and cultural variations in 22 emotional expressions across five cultures.
Cordaro, Daniel T; Sun, Rui; Keltner, Dacher; Kamble, Shanmukh; Huddar, Niranjan; McNeil, Galen
2018-02-01
We collected and Facial Action Coding System (FACS) coded over 2,600 free-response facial and body displays of 22 emotions in China, India, Japan, Korea, and the United States to test 5 hypotheses concerning universals and cultural variants in emotional expression. New techniques enabled us to identify cross-cultural core patterns of expressive behaviors for each of the 22 emotions. We also documented systematic cultural variations of expressive behaviors within each culture that were shaped by the cultural resemblance in values, and identified a gradient of universality for the 22 emotions. Our discussion focused on the science of new expressions and how the evidence from this investigation identifies the extent to which emotional displays vary across cultures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
Social Use of Facial Expressions in Hylobatids
Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja
2016-01-01
Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660
Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus
2005-01-01
The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing to visualize the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
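The per-tile contribution analysis can be approximated as follows; the tile grid, trial structure, and counts are our assumptions for illustration, not the paper's exact computation.

```python
# Toy per-tile diagnostic-value computation (tile grid and trials invented).
def tile_contributions(trials, n_tiles=48):
    """trials: (visible_tile_ids, correctly_recognised) pairs.

    Returns, per tile, the proportion of correct trials in which the
    tile was uncovered at the moment of recognition."""
    visible_when_correct = [0] * n_tiles
    correct_total = 0
    for visible, correct in trials:
        if correct:
            correct_total += 1
            for t in visible:
                visible_when_correct[t] += 1
    return [v / correct_total for v in visible_when_correct]

# Three trials over a 4-tile toy face; tiles 0 and 1 cover the eyes:
contrib = tile_contributions(
    [({0, 1}, True), ({1, 2}, True), ({0, 3}, False)], n_tiles=4)
```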
A unified probabilistic framework for spontaneous facial action modeling and understanding.
Tong, Yan; Chen, Jixu; Ji, Qiang
2010-02-01
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
Seeing Emotions: A Review of Micro and Subtle Emotion Expression Training
ERIC Educational Resources Information Center
Poole, Ernest Andre
2016-01-01
In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system of measuring muscular movement in the face and its relationship to the…
Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya
2016-01-01
A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like “automatic” response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target’s facial expressions depends on whether participants are motivated to infer the target’s emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target’s emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target’s emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target’s emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes. PMID:27055206
[Expression of the emotions in the drawing of a man by the child from 5 to 11 years of age].
Brechet, Claire; Picard, Delphine; Baldy, René
2007-06-01
This study examines the development of children's ability to express emotions in their human figure drawing. Sixty children of 5, 8, and 11 years were asked to draw "a man," and then a "sad", "happy," "angry" and "surprised" man. Expressivity of the drawings was assessed by means of two procedures: a limited choice and a free labelling procedure. Emotionally expressive drawings were then evaluated in terms of the number and the type of graphic cues that were used to express emotion. It was found that children are able to depict happiness and sadness at 8, anger and surprise at 11. With age, children use increasingly numerous and complex graphic cues for each emotion (i.e., facial expression, body position, and contextual cues). Graphic cues for facial expression (e.g., concave mouth, curved eyebrows, wide opened eyes) share strong similarities with specific "action units" described by Ekman and Friesen (1978) in their Facial Action Coding System. Children's ability to depict emotion in their human figure drawing is discussed in relation to perceptual, conceptual, and graphic abilities.
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest for realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural form of human interaction, facial animation systems have become attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors is still a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The resulting realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
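Blend shape interpolation itself reduces to a weighted sum of per-expression displacement vectors added to a neutral mesh. A minimal sketch, with hypothetical 2D vertex data (the paper's meshes, weights, and FACS mapping are not specified in the abstract):

```python
import numpy as np

def blend(neutral, shapes, weights):
    """Linear blend-shape interpolation: neutral vertices plus a weighted
    sum of displacement vectors (target shape minus neutral)."""
    out = neutral.astype(float).copy()
    for w, s in zip(weights, shapes):
        out += w * (s - neutral)
    return out

neutral = np.zeros((3, 2))                       # 3 vertices in 2D, hypothetical
smile = np.array([[0, 0], [1, 1], [0, 0]], float)  # one blend-shape target
blended = blend(neutral, [smile], [0.5])         # half-strength smile
```

Driving the weights from FACS action-unit intensities, as the paper proposes, would amount to mapping each coded muscle action to one or more blend-shape weights.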
Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study
Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.
2009-01-01
Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384
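The changing local association between smile strength and eye constriction reported above is the kind of pattern a sliding-window correlation exposes. A generic sketch with synthetic signals (window length and data are illustrative, not from the study):

```python
import numpy as np

def sliding_corr(x, y, win):
    """Pearson correlation of x and y in overlapping windows of length
    `win`, a simple way to track nonstationary local association."""
    out = []
    for i in range(len(x) - win + 1):
        xs, ys = x[i:i + win], y[i:i + win]
        out.append(float(np.corrcoef(xs, ys)[0, 1]))
    return out

t = np.linspace(0, 1, 20)
smile = np.sin(2 * np.pi * t)    # hypothetical smile-strength time series
eye = smile.copy()               # perfectly coupled eye-constriction signal
r = sliding_corr(smile, eye, win=5)
```

With perfectly coupled signals every window correlates at 1; in real dyadic data the windowed values rise and fall, indexing repair and dissolution of synchrony.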
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
Skilful communication: Emotional facial expressions recognition in very old adults.
María Sarabia-Cobo, Carmen; Navas, María José; Ellgring, Heiner; García-Rodríguez, Beatriz
2016-02-01
The main objective of this study was to assess the changes associated with ageing in the ability to identify emotional facial expressions, and to what extent such age-related changes depend on the intensity with which each basic emotion is manifested. A randomised controlled trial was carried out on 107 subjects, who performed a six-alternative forced-choice emotional expression identification task. The stimuli consisted of 270 virtual emotional faces expressing the six basic emotions (happiness, sadness, surprise, fear, anger and disgust) at three different levels of intensity (low, pronounced and maximum). The virtual faces were generated by facial surface changes, as described in the Facial Action Coding System (FACS). A progressive age-related decline in the ability to identify emotional facial expressions was detected. The ability to recognise the intensity of expressions was one of the variables most strongly impaired with age, although the valence of emotion was also poorly identified, particularly in terms of recognising negative emotions. Nurses should be mindful of how ageing affects communication with older patients. In this study, very old adults displayed more difficulty in identifying emotional facial expressions, especially low-intensity expressions and those associated with difficult emotions like disgust or fear.
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
2016-01-01
Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded a worse Ekman global score and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
The Facial Expression Coding System (FACES): Development, Validation, and Utility
ERIC Educational Resources Information Center
Kring, Ann M.; Sloan, Denise M.
2007-01-01
This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
... Process To Develop Consumer Data Privacy Code of Conduct Concerning Facial Recognition Technology AGENCY... technology. This Notice announces the meetings to be held in February, March, April, May, and June 2014. The... promote trust regarding facial recognition technology in the commercial context.\\4\\ NTIA encourages...
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.
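The norm-based prediction tested here can be illustrated with a toy one-dimensional model: adapting to an anti-expression shifts the norm toward the adaptor, so a previously average test face is re-coded as lying opposite the adaptor, with a shift that scales with adaptor strength. The gain value below is an arbitrary assumption for illustration only.

```python
def aftereffect(adaptor_strength, gain=0.3):
    """Perceived position of an average test face after adaptation, on a
    1-D expression axis (negative = anti-expression side). The norm moves
    toward the adaptor, so the unchanged test face appears shifted the
    opposite way, proportionally to adaptor strength."""
    norm_shift = -gain * adaptor_strength   # norm drifts toward the (negative) adaptor
    return 0.0 - norm_shift                 # test face re-coded relative to the new norm

weak, strong = aftereffect(0.5), aftereffect(1.0)
```

An exemplar-based account would not predict this monotone scaling, which is the contrast the adaptation paradigm exploits.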
Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin
2017-02-01
It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent; however, they mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD; that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDCs but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD, but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude
2015-01-01
“Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
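The adjusted linear regression in this study has the generic form: outcome (proportion of neutral expressions) regressed on symptom score plus covariates. A minimal ordinary-least-squares sketch with fabricated illustrative numbers, not the study's data or code:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept.
    Returns coefficients [intercept, b1, ...]."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# hypothetical: neutral-expression proportion rises with symptom score
ptsd = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
neutral_prop = 0.2 + 0.1 * ptsd
beta = ols(neutral_prop, ptsd.reshape(-1, 1))
```

In the study, additional columns for sex, age, and baseline expression would be stacked into `X` so the symptom-score coefficient is adjusted for them.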
Sparse coding for flexible, robust 3D facial-expression synthesis.
Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun
2012-01-01
Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
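Sparse coding of the kind described here represents each face as a sparse combination of dictionary atoms. The abstract does not name a solver, so the sketch below uses orthogonal matching pursuit as a generic stand-in, on a trivial dictionary:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms,
    then least-squares fit on the selected support."""
    resid = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, x, rcond=None)
        resid = x - sub @ sol
    coef[support] = sol
    return coef

D = np.eye(4)                    # trivial orthonormal "expression" dictionary
x = np.array([0.0, 3.0, 0.0, 1.0])
c = omp(D, x, k=2)               # sparse code; D @ c reconstructs x
```

Recovery from noisy or incomplete scans, as the paper claims, follows the same pattern: fit the sparse code only on the observed entries, then reconstruct the full shape as `D @ c`.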
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways compared to previous work. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous methods and other fusion approaches.
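The shape descriptor and score-fusion steps can be sketched separately. The point indices and the fusion weight below are illustrative assumptions; the paper trains an SVM for the fusion step, which this sketch replaces with a fixed convex combination for brevity:

```python
import numpy as np

def shape_ratios(points, pairs):
    """Ratios between inter-point distances (e.g., mouth width over mouth
    height), a scale-invariant shape descriptor in the spirit of the
    FACS-based ratios described above. `pairs` lists ((a, b), (c, d))
    index pairs; all indices are hypothetical."""
    d = lambda a, b: float(np.linalg.norm(points[a] - points[b]))
    return [d(a, b) / d(c, e) for (a, b), (c, e) in pairs]

def fuse(shape_score, appearance_score, w=0.5):
    """Convex combination of the two matching scores (SVM stand-in)."""
    return w * shape_score + (1 - w) * appearance_score

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 1.0], [2.0, -1.0]])  # mouth landmarks
ratios = shape_ratios(pts, [((0, 1), (2, 3))])  # width / height
score = fuse(0.8, 0.6)
```

The ratio representation is what makes the shape channel insensitive to face scale, which the raw point coordinates are not.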
Sayette, Michael A.; Creswell, Kasey G.; Dimoff, John D.; Fairbairn, Catharine E.; Cohn, Jeffrey F.; Heckman, Bryan W.; Kirchner, Thomas R.; Levine, John M.; Moreland, Richard L.
2017-01-01
We integrated research on emotion and on small groups to address a fundamental and enduring question facing alcohol researchers: What are the specific mechanisms that underlie the reinforcing effects of drinking? In one of the largest alcohol-administration studies yet conducted, we employed a novel group-formation paradigm to evaluate the socioemotional effects of alcohol. Seven hundred twenty social drinkers (360 male, 360 female) were assembled into groups of 3 unacquainted persons each and given a moderate dose of an alcoholic, placebo, or control beverage, which they consumed over 36 min. These groups’ social interactions were video recorded, and the duration and sequence of interaction partners’ facial and speech behaviors were systematically coded (e.g., using the Facial Action Coding System). Alcohol consumption enhanced individual- and group-level behaviors associated with positive affect, reduced individual-level behaviors associated with negative affect, and elevated self-reported bonding. Our results indicate that alcohol facilitates bonding during group formation. Assessing nonverbal responses in social contexts offers new directions for evaluating the effects of alcohol. PMID:22760882
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.
Enjoying vs. smiling: Facial muscular activation in response to emotional language.
Fino, Edita; Menegatti, Michela; Avenanti, Alessio; Rubini, Monica
2016-07-01
The present study examined whether emotionally congruent facial muscular activation (a somatic index of emotional language embodiment) can be elicited by reading subject-verb sentences composed of action verbs, which refer directly to facial expressions (e.g., Mario smiles), but also by reading more abstract state verbs, which provide more direct access to the emotions felt by the agent (e.g., Mario enjoys). To address this issue, we measured facial electromyography (EMG) while participants evaluated state and action verb sentences. We found that emotional sentences including both verb categories had valence-congruent effects on emotional ratings and corresponding facial muscle activations. As expected, state verb sentences were judged with higher valence ratings than action verb sentences. Moreover, although emotion-congruent facial activations were similar for the two linguistic categories, in a late temporal window we found a tendency for greater EMG modulation when reading action relative to state verb sentences. These results support embodied theories of language comprehension and suggest that understanding emotional action and state verb sentences relies on partially dissociable motor and emotional processes.
Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki
2017-10-01
Speech is unique to humans and characterized by facial actions of ∼5 Hz oscillations of lip, mouth or jaw movements. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns, matching that of speech and suggesting that lip-smacking is a precursor of speech. We tested whether facial or vocal actions exhibiting the same oscillation rate are found across a wide range of facial and vocal displays in various social contexts, exhibiting diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides), and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. The facial display (TC) and vocalization (panting) oscillated at 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicate a common mechanism for the central pattern generator underlying orofacial movements, which would evolve to speech. Similar oscillations in panting, which evolved from different muscular control than the orofacial actions, suggest the sensory foundations for perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states that a central pattern generator for 5-Hz facial oscillation and a perceptual background tuned to 5-Hz actions existed in common ancestors of macaques and humans, before the emergence of speech.
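Estimating an oscillation rate from open-to-open mouth durations, as described above, is just the reciprocal of the mean cycle length. A minimal sketch with hypothetical cycle durations near the reported ~5-6 Hz range:

```python
def mean_rate_hz(durations_ms):
    """Oscillation rate implied by successive open-to-open mouth durations
    (cycle lengths in milliseconds): reciprocal of the mean cycle length."""
    mean_s = sum(durations_ms) / len(durations_ms) / 1000.0
    return 1.0 / mean_s

# hypothetical teeth-chattering cycle durations, not the study's measurements
rate = mean_rate_hz([180, 170, 160, 175, 165])
```

A mean cycle of 170 ms corresponds to roughly 5.9 Hz, within the band the authors report for both teeth chattering and panting.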
Contextual influences on pain communication in couples with and without a partner with chronic pain.
Gagnon, Michelle M; Hadjistavropoulos, Thomas; MacNab, Ying C
2017-10-01
This is an experimental study of pain communication in couples. Despite evidence that chronic pain in one partner impacts both members of the dyad, dyadic influences on pain communication have not been sufficiently examined and are typically studied based on retrospective reports. Our goal was to directly study contextual influences (ie, presence of chronic pain, gender, relationship quality, and pain catastrophizing) on self-reported and nonverbal (ie, facial expressions) pain responses. Couples with (n = 66) and without (n = 65) an individual with chronic pain (ICP) completed relationship and pain catastrophizing questionnaires. Subsequently, one partner underwent a pain task (pain target, PT), while the other partner observed (pain observer, PO). In couples with an ICP, the ICP was assigned to be the PT. Pain intensity and PO perceived pain intensity ratings were recorded at multiple intervals. Facial expressions were video recorded throughout the pain task. Pain-related facial expression was quantified using the Facial Action Coding System. The most consistent predictor of either partner's pain-related facial expression was the pain-related facial expression of the other partner. Pain targets provided higher pain ratings than POs and female PTs reported and showed more pain, regardless of chronic pain status. Gender and the interaction between gender and relationship satisfaction were predictors of pain-related facial expression among PTs, but not POs. None of the examined variables predicted self-reported pain. Results suggest that contextual variables influence pain communication in couples, with distinct influences for PTs and POs. Moreover, self-report and nonverbal responses are not displayed in a parallel manner.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. In practical expression recognition applications, however, this assumption rarely holds for the training and test sets because of differences in lighting, shading, race, and so on. To address this problem and improve recognition performance in real applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, the dictionary, is learnt. Then, following the idea of transfer learning, the learnt primitive patterns are transferred to facial expressions, and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE, and NVIE databases show that the proposed transfer learning method based on sparse coding effectively improves the recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
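The pipeline described (learn a dictionary on one domain, then sparse-code another domain's data over it) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary is formed from normalized source patches rather than learnt by optimization, the ISTA solver and all parameter values are assumptions, and random arrays stand in for real CK+/JAFFE/NVIE features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for flattened image-patch features (rows = patches).
source = rng.standard_normal((200, 64))   # "source domain", e.g. CK+
target = rng.standard_normal((50, 64))    # "target domain", e.g. JAFFE

# Crude "dictionary learning": take normalized source patches as atoms.
D = source[:32] / np.linalg.norm(source[:32], axis=1, keepdims=True)

def sparse_code(X, D, lam=0.1, steps=100):
    """ISTA: minimize 0.5*||X - A @ D||^2 + lam*||A||_1 over codes A."""
    L = np.linalg.norm(D @ D.T, 2)        # Lipschitz constant of the gradient
    A = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        grad = (A @ D - X) @ D.T          # gradient of the data term
        A = A - grad / L                  # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # shrinkage
    return A

# Transfer: code target-domain patches over the source-learnt dictionary.
codes = sparse_code(target, D)
print(codes.shape)                        # (50, 32): sparse features for a classifier
```

The resulting sparse codes would then feed a standard classifier; reusing the source-domain dictionary is what carries information across domains.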
Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions
ERIC Educational Resources Information Center
Sato, Wataru; Yoshikawa, Sakiko
2007-01-01
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Auerbach, Sarah
2017-01-01
Trait cheerfulness predicts individual differences in experiences and behavioral responses in various humor experiments and settings. The present study is the first to investigate whether trait cheerfulness also influences the impact of a hospital clown intervention on the emotional state of patients. Forty-two adults received a clown visit in a rehabilitation center and rated their emotional state and trait cheerfulness afterward. Facial expressions of patients during the clown visit were coded with the Facial Action Coding System. Looking at the total sample, the hospital clown intervention elicited more frequent facial expressions of genuine enjoyment (Duchenne smiles) than other smiles (Non-Duchenne smiles), and more Duchenne smiles went along with more perceived funniness, a higher level of global positive feelings and transcendence. This supports the notion that overall, hospital clown interventions are beneficial for patients. However, when considering individual differences in the receptiveness to humor, results confirmed that high trait cheerful patients showed more Duchenne smiles than low trait cheerful patients (with no difference in Non-Duchenne smiles), and reported a higher level of positive emotions than low trait cheerful individuals. In summary, although hospital clown interventions on average successfully raise the patients’ level of positive emotions, not all patients in hospitals are equally susceptible to respond to humor with amusement, and thus do not equally benefit from a hospital clown intervention. Implications for research and practitioners are discussed. PMID:29180976
Short Alleles, Bigger Smiles? The Effect of 5-HTTLPR on Positive Emotional Expressions
Haase, Claudia M.; Beermann, Ursula; Saslow, Laura R.; Shiota, Michelle N.; Saturn, Sarina R.; Lwi, Sandy J.; Casey, James J.; Nguyen, Nguyen K.; Whalen, Patrick K.; Keltner, Dacher J.; Levenson, Robert W.
2015-01-01
The present research examined the effect of the 5-HTTLPR polymorphism in the serotonin transporter gene on objectively coded positive emotional expressions (i.e., laughing and smiling behavior objectively coded using the Facial Action Coding System). Three studies with independent samples of participants were conducted. Study 1 examined young adults watching still cartoons. Study 2 examined young, middle-aged, and older adults watching a thematically ambiguous yet subtly amusing film clip. Study 3 examined middle-aged and older spouses discussing an area of marital conflict (which typically produces both positive and negative emotion). Aggregating data across studies, results showed that the short allele of 5-HTTLPR predicted heightened positive emotional expressions. Results remained stable when controlling for age, gender, ethnicity, and depressive symptoms. These findings are consistent with the notion that the short allele of 5-HTTLPR functions as an emotion amplifier, which may confer heightened susceptibility to environmental conditions. PMID:26029940
Sasaki, Ryo; Matsumine, Hajime; Watanabe, Yorikatsu; Takeuchi, Yuichi; Yamato, Masayuki; Okano, Teruo; Miyata, Mariko; Ando, Tomohiro
2014-11-01
Dental pulp tissue contains Schwann and neural progenitor cells, and tissue-engineered nerve conduits with dental pulp cells promote facial nerve regeneration in rats. However, no functional or electrophysiologic evaluations of the regenerated nerve had been performed. This study investigated compound muscle action potential recordings and facial functional analysis of dental-pulp-cell-regenerated nerve in rats. A silicone tube containing rat dental pulp cells in type I collagen gel was transplanted into a 7-mm gap in the buccal branch of the facial nerve in Lewis rats; the same defect was created in the marginal mandibular branch, which was ligated. Compound muscle action potentials of the vibrissal muscles were recorded, and facial function was analyzed with a facial palsy score. Tubulation with dental pulp cells showed significantly lower facial palsy scores than the autograft group between 3 and 10 weeks postoperatively; however, the scores no longer differed significantly from those of the autograft group after 11 weeks. The amplitude and duration of compound muscle action potentials in the dental pulp cell group did not differ significantly from those of the intact and autograft groups, and there was no significant difference in latency between the groups at 13 weeks postoperatively, although the latency in the dental pulp cell group was longer than that in the intact group. Tubulation with dental pulp cells thus achieved functional and electrophysiological recovery of facial nerve defects, with recovery comparable to that of nerve autografting in rats.
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complex function using information acquired from simpler, pre-equipped functions. We propose a self-learning and recognition system for human facial expressions that operates within the natural relationship between human and robot. A robot with this system can understand human facial expressions and behave according to them after completing the learning process. The system is modelled after the process by which a baby learns its parents' facial expressions. A camera mounted on the robot provides face images, and CdS sensors on the robot's head capture information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot responds with actions appropriate to that facial expression.
Norm-based coding of facial identity in adults with autism spectrum disorder.
Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D
2015-03-01
It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Boyd, Hope; Murnen, Sarah K
2017-06-01
We examined the extent to which popular dolls and action figures were portrayed with gendered body proportions, and the extent to which these gendered ideals were associated with heterosexual "success." We coded internet depictions of 72 popular female dolls and 71 popular male action figures from the websites of three national stores in the United States. Sixty-two percent of dolls had a noticeably thin body, while 42.3% of action figures had noticeably muscular bodies. Further, thinner dolls were portrayed with more sex-object features than less thin dolls, including revealing, tight clothing and high-heeled shoes; bodies positioned with a curved spine, bent knee, and head cant; and a sexually appealing facial expression. More muscular male action figures were more likely than less muscular ones to be shown with hands in fists and with an angry, emotional expression, suggesting male dominance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
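A toy sketch of the predictive-coding idea described here: familiarity is updated by prediction errors across repeated exposures, and recognition combines facial and contextual familiarity. The delta-rule form, learning rate, and equal weighting are illustrative assumptions, not the paper's fitted model.

```python
# Hypothetical delta-rule sketch: familiarity F is updated by a
# prediction error (outcome - F) scaled by a learning rate, and
# recognition combines facial and contextual familiarity.
def update_familiarity(F, outcome, lr=0.3):
    prediction_error = outcome - F        # mismatch drives learning
    return F + lr * prediction_error, prediction_error

F = 0.0                                   # initially unfamiliar face
for trial in range(10):                   # repeated exposures
    F, pe = update_familiarity(F, outcome=1.0)

context_familiarity = 0.8                 # familiar testing context
recognition = 0.5 * F + 0.5 * context_familiarity
print(round(F, 3), round(recognition, 3))  # → 0.972 0.886
```

The point of the sketch is the division of labor: the prediction-error term corresponds to the signal the authors localized to the fusiform face area, while the contextual term corresponds to the superior temporal sulcus covariate.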
Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach for detection of the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis using the Merlin-Farber Hough transform (MFHT). A new performance-improvement scheme for midline detection by the MFHT is also presented. The main concept of the proposed scheme is suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
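The voting idea behind symmetry-axis detection can be sketched in a simplified form. This toy version (an assumption for illustration, not the paper's chain-coded transform over the full Hough parameter space) considers only vertical axes: each pair of edge points in the same row votes for the column midway between them.

```python
import numpy as np

# Toy symmetry-axis voting: pairs of edge points sharing a row vote
# for the column halfway between them; the busiest column wins.
def vertical_midline(edge_points, width):
    votes = np.zeros(2 * width)           # half-pixel resolution
    by_row = {}
    for x, y in edge_points:
        by_row.setdefault(y, []).append(x)
    for xs in by_row.values():
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                votes[xs[i] + xs[j]] += 1  # 2 * midpoint = x_i + x_j
    return np.argmax(votes) / 2.0

# A synthetic "face" edge map symmetric about column 10.
points = [(6, 2), (14, 2), (4, 5), (16, 5), (8, 7), (12, 7)]
print(vertical_midline(points, width=20))  # → 10.0
```

The chain-code refinement in the paper addresses the quadratic cost visible here: encoding the edge image as chains lets redundant point pairs be skipped before voting.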
Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S.; Winkielman, Piotr
2018-01-01
Facial actions are key elements of non-verbal behavior. Perceivers’ reactions to others’ facial expressions often represent a match or mirroring (e.g., they smile to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants’ reactions over zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants’ facial responses to android’s expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants’ responses to the game outcome were similar regardless of whether it was conveyed via the android’s smile or frown. Furthermore, the outcome had greater impact on people’s facial reactions when it was conveyed through android’s face than a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions. PMID:29740307
Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto
2015-04-01
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. 
Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Altering sensorimotor feedback disrupts visual discrimination of facial expressions.
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
2016-08-01
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
Action recognition is sensitive to the identity of the actor.
Ferstl, Ylva; Bülthoff, Heinrich; de la Rosa, Stephan
2017-09-01
Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been the subject of discussion in embodied approaches to action recognition. Here we examine one solution, that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action-related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. The finding demonstrates that neural processes underlying action recognition are sensitive to identity information (including facial identity) and thereby not exclusively tuned to actions. We suggest that such response properties are useful to help humans in knowing who carried out an action. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Brain Systems for Assessing Facial Attractiveness
ERIC Educational Resources Information Center
Winston, Joel S.; O'Doherty, John; Kilner, James M.; Perrett, David I.; Dolan, Raymond J.
2007-01-01
Attractiveness is a facial attribute that shapes human affiliative behaviours. In a previous study we reported a linear response to facial attractiveness in orbitofrontal cortex (OFC), a region involved in reward processing. There are strong theoretical grounds for the hypothesis that coding stimulus reward value also involves the amygdala. The…
Grossman, Ruth B; Edelson, Lisa R; Tager-Flusberg, Helen
2013-06-01
People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Participants were 22 male children and adolescents with HFA and 18 typically developing (TD) controls (17 males, 1 female). The authors used a story retelling task to elicit emotionally laden narratives, which were analyzed through the use of acoustic measures and perceptual codes. Naïve listeners coded all productions for emotion type, degree of expressiveness, and awkwardness. The group with HFA was not significantly different in accuracy or expressiveness of facial productions, but was significantly more awkward than the TD group. Participants with HFA were significantly more expressive in their vocal productions, with a trend for greater awkwardness. Severity of social communication impairment, as captured by the Autism Diagnostic Observation Schedule (ADOS; Lord, Rutter, DiLavore, & Risi, 1999), was correlated with greater vocal and facial awkwardness. Facial and vocal expressions of participants with HFA were as recognizable as those of their TD peers but were qualitatively different, particularly when listeners coded samples with intact dynamic properties. These preliminary data show qualitative differences in nonverbal communication that may have significant negative impact on the social communication success of children and adolescents with HFA.
NASA Astrophysics Data System (ADS)
Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.
2016-03-01
Autism Spectrum Disorders (ASD) can impair non-verbal communication including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in the facial expressions as shown by facial muscle-specific changes (also known as 'facial oddity' for subjects with ASD) may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on a group of eight participants with ASD and eight typically developing participants in a control group, capturing their facial images in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean level of action in the facial muscles of the ASD group versus the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding about facial oddity may provide objective, measurable differential markers in the facial expressions of individuals with ASD.
Intact imitation of emotional facial actions in autism spectrum conditions.
Press, Clare; Richardson, Daniel; Bird, Geoffrey
2010-09-01
It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls, when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development. Copyright 2010 Elsevier Ltd. All rights reserved.
Coll, Sélim Yahia; Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier
2018-05-02
Different parts of our brain code the perceptual features and actions related to an object, causing a binding problem: the brain has to integrate information related to an event without interference from the features and actions involved in other concurrently processed events. Using a paradigm similar to that of Hommel, who revealed perception-action bindings, we showed that emotion can bind with motor actions when relevant and, in specific conditions, even when irrelevant to the task. By adapting our protocol to a functional magnetic resonance imaging paradigm, in the present study we investigated the neural bases of emotion-action binding with task-relevant angry faces. Our results showed that emotion bound with motor responses. This integration revealed increased activity in distributed brain areas involved in: (i) memory, including the hippocampi; (ii) motor actions, with the precentral gyri; and (iii) emotion processing, with the insula. Interestingly, increased activations in the cingulate gyri and putamen highlighted their potential key role in emotion-action binding, given their involvement in emotion processing, motor actions, and memory. The present study confirms our previous results and points out for the first time the functional brain activity related to the emotion-action association.
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
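The FACS-based control mentioned above can be illustrated with a small sketch that matches active Action Units (AUs) against commonly cited emotion prototypes (e.g., happiness ≈ AU6 + AU12). The prototype sets and the overlap-matching rule are textbook approximations for illustration, not Muecas' actual controller or recognition system.

```python
# Illustrative AU-to-expression matching; prototype sets follow
# commonly cited FACS combinations, and the matching rule is assumed.
PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid tighteners + lip tightener
}

def classify(active_aus):
    """Return the prototype whose AU set best overlaps the input."""
    def score(item):
        name, aus = item
        return len(aus & active_aus) / len(aus)
    return max(PROTOTYPES.items(), key=score)[0]

print(classify({6, 12}))         # → happiness
print(classify({1, 2, 5, 26}))   # → surprise
```

The same AU sets can run in the other direction for synthesis: a target expression selects a prototype AU set, which a controller maps onto the head's degrees of freedom.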
Bodily action penetrates affective perception
Rigutti, Sara; Gerbino, Walter
2016-01-01
Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches; while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. 
All of these constitute compelling evidence of a genuine top-down effect on perception: specifically, facial expressions of emotion are penetrable by action-induced mood. Affective priming by action valence is a candidate mechanism for the influence of observer’s internal states on properties experienced as phenomenally objective and yet loaded with meaning. PMID:26893964
The Influence of Facial Signals on the Automatic Imitation of Hand Actions
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2016-01-01
Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, little is known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate “in the moment” states than enduring traits. These data support the view that a smile primes multiple forms of imitation, including the copying of actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation. PMID:27833573
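Automatic imitation paradigms like the one described quantify imitation as the reaction-time cost of executing an action while observing an incongruent one. A minimal sketch with hypothetical reaction times (condition names and numbers are illustrative, not from the study):

```python
from statistics import mean

# Hypothetical reaction times (ms) on an automatic imitation task, keyed by
# (facial prime, trial type). Incongruent minus congruent RT gives the
# "congruency effect", the index of automatic imitation.
rts = {
    ("happy", "congruent"): [412, 430, 405, 421],
    ("happy", "incongruent"): [468, 455, 472, 460],
    ("neutral", "congruent"): [418, 425, 410, 422],
    ("neutral", "incongruent"): [448, 441, 452, 445],
}

def congruency_effect(rts, prime):
    """Mean incongruent RT minus mean congruent RT for a given facial prime."""
    return mean(rts[(prime, "incongruent")]) - mean(rts[(prime, "congruent")])

for prime in ("happy", "neutral"):
    print(prime, congruency_effect(rts, prime))
```

With these made-up numbers the happy prime yields a larger congruency effect than the neutral one, mirroring the direction of the abstract's Experiments 1 and 2.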
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage "low-motor" interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism. PMID:25954336
Short alleles, bigger smiles? The effect of 5-HTTLPR on positive emotional expressions.
Haase, Claudia M; Beermann, Ursula; Saslow, Laura R; Shiota, Michelle N; Saturn, Sarina R; Lwi, Sandy J; Casey, James J; Nguyen, Nguyen K; Whalen, Patrick K; Keltner, Dacher; Levenson, Robert W
2015-08-01
The present research examined the effect of the 5-HTTLPR polymorphism in the serotonin transporter gene on objectively coded positive emotional expressions (i.e., laughing and smiling behavior coded using the Facial Action Coding System). Three studies with independent samples of participants were conducted. Study 1 examined young adults watching still cartoons. Study 2 examined young, middle-aged, and older adults watching a thematically ambiguous yet subtly amusing film clip. Study 3 examined middle-aged and older spouses discussing an area of marital conflict (that typically produces both positive and negative emotion). Aggregating data across studies, results showed that the short allele of 5-HTTLPR predicted heightened positive emotional expressions. Results remained stable when controlling for age, gender, ethnicity, and depressive symptoms. These findings are consistent with the notion that the short allele of 5-HTTLPR functions as an emotion amplifier, which may confer heightened susceptibility to environmental conditions. (c) 2015 APA, all rights reserved.
Humor and laughter in patients with cerebellar degeneration.
Frank, B; Propson, B; Göricke, S; Jacobi, H; Wild, B; Timmann, D
2012-06-01
Humor is a complex behavior which includes cognitive, affective and motor responses. Based on observations of affective changes in patients with cerebellar lesions, the cerebellum may support cerebral and brainstem areas involved in understanding and appreciation of humorous stimuli and expression of laughter. The aim of the present study was to examine if humor appreciation, perception of humorous stimuli, and the succeeding facial reaction differ between patients with cerebellar degeneration and healthy controls. Twenty-three adults with pure cerebellar degeneration were compared with 23 age-, gender-, and education-matched healthy control subjects. No significant difference in humor appreciation and perception of humorous stimuli could be found between groups using the 3 Witz-Dimensionen Test, a validated test asking for funniness and aversiveness of jokes and cartoons. Furthermore, while observing jokes, humorous cartoons, and video sketches, facial expressions of subjects were videotaped and afterwards analysed using the Facial Action Coding System. Using depression as a covariate, the number, and to a lesser degree, the duration of facial expressions during laughter were reduced in cerebellar patients compared to healthy controls. In sum, appreciation of humor appears to be largely preserved in patients with chronic cerebellar degeneration. Cerebellar circuits may contribute to the expression of laughter. Findings add to the literature that non-motor disorders in patients with chronic cerebellar disease are generally mild, but do not exclude that more marked disorders may show up in acute cerebellar disease and/or in more specific tests of humor appreciation.
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
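A toy sketch of the feature pipeline the abstract describes: hash raw pixel patches into short binary codes, then pool the codewords into a per-image histogram. For simplicity the projection here is random; in CS-LBFL it is learned with a cost-sensitive objective so that codes of similar chronological ages land close together:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_codes(patches, projection):
    """Hash raw pixel patches to binary codes via linear projection + sign."""
    return (patches @ projection > 0).astype(int)

def histogram_feature(codes):
    """Pool binary codes into a normalized histogram over all 2**k codewords."""
    k = codes.shape[1]
    # interpret each k-bit code as an integer codeword index
    idx = codes @ (1 << np.arange(k))
    hist = np.bincount(idx, minlength=2 ** k).astype(float)
    return hist / hist.sum()

# toy "face": 100 random 8x8 patches flattened to 64-dim vectors
patches = rng.standard_normal((100, 64))
W = rng.standard_normal((64, 4))   # 4 hashing functions -> 4-bit codes
feature = histogram_feature(binary_codes(patches, W))
print(feature.shape)  # (16,) — one bin per 4-bit codeword
```

The multi-feature variant in the paper repeats this over patches at several scales and concatenates the histograms.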
ERIC Educational Resources Information Center
Camras, Linda A.; Oster, Harriet; Bakeman, Roger; Meng, Zhaolan; Ujiie, Tatsuo; Campos, Joseph J.
2007-01-01
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants'…
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816
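The core mixture idea can be sketched in toy form: two sparse coding submodels (dictionaries) compete to explain an input, and the submodel with lower energy (reconstruction error plus sparsity cost) claims it. This sketch omits the paper's Gabor front end and the full Bayesian explaining-away across units; the dictionaries and input below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(D, x, lam=0.1, n_iter=50):
    """Sparse-code x under dictionary D with an L1 penalty (ISTA iterations)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def mixture_energies(dictionaries, x, lam=0.1):
    """Energy (negative log posterior up to a constant) of x under each submodel."""
    energies = []
    for D in dictionaries:
        a = ista(D, x, lam)
        energies.append(0.5 * np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a)))
    return np.array(energies)

# two toy "category" dictionaries and an input built from the first one
D_face = rng.standard_normal((20, 30))
D_obj = rng.standard_normal((20, 30))
x = D_face @ (rng.standard_normal(30) * (rng.random(30) < 0.2))  # sparse in D_face
E = mixture_energies([D_face, D_obj], x)
print("energies:", E)
```

In the full model the winning submodel's posterior suppresses the other's units, which is the explaining-away effect the abstract argues is crucial for face selectivity.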
Emotion categories and dimensions in the facial communication of affect: An integrated approach.
Mehu, Marc; Scherer, Klaus R
2015-12-01
We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. (c) 2015 APA, all rights reserved.
Anderson, Craig L; Monroy, Maria; Keltner, Dacher
2018-04-01
Emotional expressions communicate information about the individual's internal state and evoke responses in others that enable coordinated action. The current work investigated the informative and evocative properties of fear vocalizations in a sample of youth from underserved communities and military veterans while white-water rafting. Video-taped footage of participants rafting through white-water rapids was coded for vocal and facial expressions of fear, amusement, pride, and awe, yielding more than 1,300 coded expressions, which were then related to measures of subjective emotion and cortisol response. Consistent with informative properties of emotional expressions, fear vocalizations were positively and significantly related to facial expressions of fear, subjective reports of fear, and individuals' cortisol levels measured after the rafting trip. It is important to note that this coherent pattern was unique to fear vocalizations; vocalizations of amusement, pride, and awe were not significantly related to fear expressions in the face, subjective reports of fear, or cortisol levels. Demonstrating the evocative properties of emotional expression, fear vocalizations of individuals appeared to evoke fear vocalizations in other people in their raft, and cortisol levels of individuals within rafts similarly converged at the end of the trip. We discuss how the study of spontaneous emotion expressions in naturalistic settings can help address basic yet controversial questions about emotions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
[Contribution of botulinum toxin to maxillo-facial surgery].
Batifol, D; de Boutray, M; Goudot, P; Lorenzo, S
2013-04-01
Botulinum toxin has a wide range of use in maxillo-facial surgery due to its action on muscles, on the glandular system, and against pain. It has already been given several market authorizations, indicated for blepharospasm, spasmodic stiff neck, and glabellar lines. Furthermore, several studies are ongoing to prove its effectiveness and usefulness for many other pathologies: treatment of pain following cervical spine surgery; action on salivary glands after trauma, hypertrophy, or hyper-salivation; analgesic action (acknowledged but still under investigation) on neuralgia, articular pain, and keloid scars due to its anti-inflammatory properties. Botulinum toxin injections in the cervico-facial area are increasingly used and should be correctly assessed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Hyvärinen, Antti; Tarkka, Ina M; Mervaala, Esa; Pääkkönen, Ari; Valtonen, Hannu; Nuutinen, Juhani
2008-12-01
The purpose of this study was to assess clinical and neurophysiological changes after 6 mos of transcutaneous electrical stimulation in patients with unresolved facial nerve paralysis. A pilot case series of 10 consecutive patients with chronic facial nerve paralysis either of idiopathic origin or because of herpes zoster oticus participated in this open study. All patients received below-sensory-threshold transcutaneous electrical stimulation for 6 mos for their facial nerve paralysis. The intervention consisted of gradually increasing the duration of electrical stimulation of three sites on the affected area for up to 6 hrs/day. Assessments of the facial nerve function were performed using the House-Brackmann clinical scale and neurophysiological measurements of compound motor action potential distal latencies on the affected and nonaffected sides. Patients were tested before and after the intervention. A significant improvement was observed in the facial nerve upper branch compound motor action potential distal latency on the affected side in all patients. An improvement of one grade in House-Brackmann scale was observed and some patients also reported subjective improvement. Transcutaneous electrical stimulation treatment may have a positive effect on unresolved facial nerve paralysis. This study illustrates a possibly effective treatment option for patients with chronic facial paresis and no other expectation of recovery.
A model for production, perception, and acquisition of actions in face-to-face communication.
Kröger, Bernd J; Kopp, Stefan; Lowit, Anja
2010-08-01
The concept of action as basic motor control unit for goal-directed movement behavior has been used primarily for private or non-communicative actions like walking, reaching, or grasping. In this paper, literature is reviewed indicating that this concept can also be used in all domains of face-to-face communication like speech, co-verbal facial expression, and co-verbal gesturing. Three domain-specific types of actions, i.e. speech actions, facial actions, and hand-arm actions, are defined in this paper and a model is proposed that elucidates the underlying biological mechanisms of action production, action perception, and action acquisition in all domains of face-to-face communication. This model can be used as theoretical framework for empirical analysis or simulation with embodied conversational agents, and thus for advanced human-computer interaction technologies.
Seeing emotions: a review of micro and subtle emotion expression training
NASA Astrophysics Data System (ADS)
Poole, Ernest Andre
2016-09-01
In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system of measuring muscular movement in the face and its relationship to the expression of emotions. The trainings build upon that seminal work and present them in a way for either the layperson or researcher to easily add to their personal toolbox for a variety of purposes. Outlined are my experiences across the training products, how they could be used in social science research, a brief comparison to automated systems, and possible next steps.
Eye contact modulates facial mimicry in 4-month-old infants: An EMG and fNIRS study.
de Klerk, Carina C J M; Hamilton, Antonia F de C; Southgate, Victoria
2018-05-16
Mimicry, the tendency to spontaneously and unconsciously copy others' behaviour, plays an important role in social interactions. It facilitates rapport between strangers, and is flexibly modulated by social signals, such as eye contact. However, little is known about the development of this phenomenon in infancy, and it is unknown whether mimicry is modulated by social signals from early in life. Here we addressed this question by presenting 4-month-old infants with videos of models performing facial actions (e.g., mouth opening, eyebrow raising) and hand actions (e.g., hand opening and closing, finger actions) accompanied by direct or averted gaze, while we measured their facial and hand muscle responses using electromyography to obtain an index of mimicry (Experiment 1). In Experiment 2 the infants observed the same stimuli while we used functional near-infrared spectroscopy to investigate the brain regions involved in modulating mimicry by eye contact. We found that 4-month-olds only showed evidence of mimicry when they observed facial actions accompanied by direct gaze. Experiment 2 suggests that this selective facial mimicry may have been associated with activation over posterior superior temporal sulcus. These findings provide the first demonstration of modulation of mimicry by social signals in young human infants, and suggest that mimicry plays an important role in social interactions from early in life. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
Production of Emotional Facial Expressions in European American, Japanese, and Chinese Infants.
ERIC Educational Resources Information Center
Camras, Linda A.; And Others
1998-01-01
European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…
Social communication in siamangs (Symphalangus syndactylus): use of gestures and facial expressions.
Liebal, Katja; Pika, Simone; Tomasello, Michael
2004-01-01
The current study represents the first systematic investigation of the social communication of captive siamangs (Symphalangus syndactylus). The focus was on intentional signals, including tactile and visual gestures, as well as facial expressions and actions. Fourteen individuals from different groups were observed and the signals used by individuals were recorded. Thirty-one different signals, consisting of 12 tactile gestures, 8 visual gestures, 7 actions, and 4 facial expressions, were observed, with tactile gestures and facial expressions appearing most frequently. The range of the signal repertoire increased steadily until the age of six, but declined afterwards in adults. The proportions of the different signal categories used within communicative interactions, in particular actions and facial expressions, also varied depending on age. Group differences could be traced back mainly to social factors or housing conditions. Differences in the repertoire of males and females were most obvious in the sexual context. Overall, most signals were used flexibly, with the majority performed in three or more social contexts and almost one-third of signals used in combination with other signals. Siamangs also adjusted their signals appropriately for the recipient, for example, using visual signals most often when the recipient was already attending (audience effects). These observations are discussed in the context of siamang ecology, social structure, and cognition.
Facial Displays Are Tools for Social Influence.
Crivelli, Carlos; Fridlund, Alan J
2018-05-01
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
The interaction between embodiment and empathy in facial expression recognition
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2018-01-01
Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON will impair action recognition and that this impairment will be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation. PMID:29378022
Fanti, Kostas A; Kyranides, Melina Nicole; Panayiotou, Georgia
2017-02-01
The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of participants' facial expressions was analysed using FaceReader, a facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced specifically more angry facial expressions when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. FaceReader findings were verified by the reduced facial electromyography at the corrugator, but not the zygomatic, muscle in response to violent films shown by individuals high in CU traits. Additional analysis suggested that sympathy to victims explained the association between CU traits and reduced facial reactions to violent films.
Fairbairn, Catharine E.; Sayette, Michael A.; Aalen, Odd O.; Frigessi, Arnoldo
2014-01-01
Researchers have hypothesized that men gain greater reward from alcohol than women. However, alcohol-administration studies testing participants drinking alone have offered weak support for this hypothesis. Research suggests that social processes may be implicated in gender differences in drinking patterns. We examined the impact of gender and alcohol on “emotional contagion”—a social mechanism central to bonding and cohesion. Social drinkers (360 male, 360 female) consumed alcohol, placebo, or control beverages in groups of three. Social interactions were video recorded, and both Duchenne and non-Duchenne smiling were continuously coded using the Facial Action Coding System. Results revealed that Duchenne smiling (but not non-Duchenne smiling) contagion correlated with self-reported reward and typical drinking patterns. Importantly, Duchenne smiles were significantly less “infectious” among sober male versus female groups, and alcohol eliminated these gender differences in smiling contagion. Findings identify new directions for research exploring social-reward processes in the etiology of alcohol problems. PMID:26504673
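In FACS terms, the Duchenne/non-Duchenne distinction coded in this study turns on whether the lip-corner pull (AU12) co-occurs with the cheek raiser (AU6). A minimal illustrative classifier over sets of active action units:

```python
def smile_type(active_aus):
    """Classify a smile from a set of active FACS action units (AU numbers).

    AU12 (lip corner puller) alone -> non-Duchenne ("social") smile;
    AU12 together with AU6 (cheek raiser) -> Duchenne ("enjoyment") smile.
    """
    if 12 not in active_aus:
        return "no smile"
    return "Duchenne" if 6 in active_aus else "non-Duchenne"

print(smile_type({6, 12}))   # -> Duchenne
print(smile_type({12}))      # -> non-Duchenne
print(smile_type({4, 7}))    # -> no smile
```

Real FACS coding also records onset, apex, offset, and intensity, which is what allows contagion to be measured frame-by-frame as in this study.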
Matsumoto, David; Willingham, Bob
2006-09-01
Facial behaviors of medal winners of the judo competition at the 2004 Athens Olympic Games were coded with P. Ekman and W. V. Friesen's (1978) Facial Action Coding System (FACS) and interpreted using their Emotion FACS dictionary. Winners' spontaneous expressions were captured immediately when they completed medal matches, when they received their medal from a dignitary, and when they posed on the podium. The 84 athletes who contributed expressions came from 35 countries. The findings strongly supported the notion that expressions occur in relation to emotionally evocative contexts in people of all cultures, that these expressions correspond to the facial expressions of emotion considered to be universal, that expressions provide information that can reliably differentiate the antecedent situations that produced them, and that expressions that occur without inhibition are different from those that occur in social and interactive settings. ((c) 2006 APA, all rights reserved).
The shared neural basis of empathy and facial imitation accuracy.
Braadbaart, L; de Grauw, H; Perrett, D I; Waiter, G D; Williams, J H G
2014-01-01
Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression. © 2013.
Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.
Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie
2014-01-01
Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotions were rated significantly higher than the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.
The Effects of Alcohol on the Emotional Displays of Whites in Interracial Groups
Fairbairn, Catharine E.; Sayette, Michael A.; Levine, John M.; Cohn, Jeffrey F.; Creswell, Kasey G.
2017-01-01
Discomfort during interracial interactions is common among Whites in the U.S. and is linked to avoidance of interracial encounters. While the negative consequences of interracial discomfort are well-documented, understanding of its causes is still incomplete. Alcohol consumption has been shown to decrease negative emotions caused by self-presentational concern but increase negative emotions associated with racial prejudice. Using novel behavioral-expressive measures of emotion, we examined the impact of alcohol on displays of discomfort among 92 White individuals interacting in all-White or interracial groups. We used the Facial Action Coding System and comprehensive content-free speech analyses to examine affective and behavioral dynamics during these 36-minute exchanges (7.9 million frames of video data). Among Whites consuming nonalcoholic beverages, those assigned to interracial groups evidenced more facial and speech displays of discomfort than those in all-White groups. In contrast, among intoxicated Whites there were no differences in displays of discomfort between interracial and all-White groups. Results highlight the central role of self-presentational concerns in interracial discomfort and offer new directions for applying theory and methods from emotion science to the examination of intergroup relations. PMID:23356562
Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing
2017-01-01
To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than when presented with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.
Another Scale for the Assessment of Facial Paralysis? ADS Scale: Our Proposition, How to Use It.
Di Stadio, Arianna
2015-12-01
Over the years, several authors have proposed different methods to evaluate facial regions and specific movement deficits in patients affected by facial palsy. Despite these efforts, the House-Brackmann scale remains the most widely used assessment in the medical community. The aim of our study is to propose and assess a new rating scale, the Arianna Disease Scale (ADS), for the clinical evaluation of facial paralysis. Sixty patients affected by unilateral facial Bell palsy were enrolled in a prospective study from 2012 to 2014. Their facial nerve function was evaluated with our assessment by analysing the facial district divided into upper, middle and lower thirds. We analysed different facial expressions, each movement corresponding to the action of different muscles. The action of each muscle was scored from 0 to 1, with 0 corresponding to complete flaccid paralysis and 1 to normal muscle function. Synkinesis was also evaluated in the scale, with a fixed score of 0.5. Our results considered the ease and speed of the assessment, the accuracy of muscle deficit evaluation, and the ability to quantify synkinesis using a score. All three observers agreed 100% on the highest degree of deficit. We found some discrepancies in intermediate scores, with 92% agreement in the upper face, 87% in the middle and 80% in the lower face, where more muscles are involved in movements. Our scale has some limitations linked to the small group of patients evaluated, and observers had some difficulty interpreting the intermediate scores of 0.3 and 0.7. However, it is an accurate tool for quickly evaluating facial nerve function and has potential as an alternative scale for evaluating and diagnosing facial nerve disorders.
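The scoring scheme described above lends itself to a simple computation. The Python sketch below is illustrative only, not the authors' implementation: the function name, the averaging of per-muscle scores within a facial third, and the example inputs are all assumptions; only the 0-to-1 per-muscle range and the fixed 0.5 synkinesis score come from the abstract.

```python
# Hypothetical sketch of ADS-style scoring (not the published scale's code).
# Each muscle action is scored from 0.0 (complete flaccid paralysis)
# to 1.0 (normal function); synkinesis receives a fixed score of 0.5.

def ads_region_score(muscle_scores, synkinesis_present=False):
    """Average the per-muscle scores for one facial third.

    muscle_scores: floats in [0, 1], one per muscle action.
    synkinesis_present: if True, the fixed 0.5 synkinesis item is added.
    """
    scores = list(muscle_scores)
    if synkinesis_present:
        scores.append(0.5)  # fixed synkinesis score per the abstract
    if not scores:
        raise ValueError("at least one muscle score is required")
    for s in scores:
        if not 0.0 <= s <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
    return sum(scores) / len(scores)

# Example: upper third with two muscle actions and no synkinesis.
upper = ads_region_score([1.0, 0.7])  # -> 0.85
# Lower third with three actions and synkinesis present.
lower = ads_region_score([0.3, 0.5, 0.7], synkinesis_present=True)
print(upper, lower)
```

Averaging is one plausible way to aggregate per-muscle scores into a regional summary; the abstract does not specify the aggregation rule.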
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Functional Role of the Periphery in Emotional Language Comprehension
Havas, David A.; Matheson, James
2013-01-01
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that (1) facial expressions accompany sudden shifts in wellbeing as described in language; (2) facial expressions modulate emotional action systems during reading; and (3) emotional action systems prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition. PMID:23750145
Role of temporal processing stages by inferior temporal neurons in facial recognition.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.
PMID:21734904
Simoni, Payman; Ostendorf, Robert; Cox, Artemus J
2003-01-01
To examine the relationship between the use of restraining devices and the incidence of specific facial fractures in motor vehicle crashes. Retrospective analysis of patients with facial fractures following a motor vehicle crash. University of Alabama at Birmingham Hospital level I trauma center from 1996 to 2000. Of 3731 patients involved in motor vehicle crashes, a total of 497 patients were found to have facial fractures as determined by International Classification of Diseases, Ninth Revision (ICD-9) codes. Facial fractures were categorized as mandibular, orbital, zygomaticomaxillary complex (ZMC), and nasal. Use of seat belts alone was more effective in decreasing the chance of facial fractures in this population (from 17% to 8%) compared with the use of air bags alone (17% to 11%). The use of seat belts and air bags together decreased the incidence of facial fractures from 17% to 5%. Use of restraining devices in vehicles significantly reduces the chance of incurring facial fractures in a severe motor vehicle crash. However, use of air bags and seat belts does not change the pattern of facial fractures greatly except for ZMC fractures. Air bags are least effective in preventing ZMC fractures. Improving the mechanics of restraining devices might be needed to minimize facial fractures.
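The reported rates imply straightforward relative-risk arithmetic. The sketch below is illustrative, not the study's analysis code; the percentages are taken from the abstract and the function name is invented for the example.

```python
# Relative risk reduction implied by the abstract's figures:
# 17% baseline facial-fracture rate, 8% with seat belts alone,
# 11% with air bags alone, 5% with both devices.

def relative_risk_reduction(baseline, with_device):
    """Fraction of baseline risk eliminated by the device."""
    return (baseline - with_device) / baseline

baseline = 0.17
print(f"seat belt only: {relative_risk_reduction(baseline, 0.08):.0%}")
print(f"air bag only:   {relative_risk_reduction(baseline, 0.11):.0%}")
print(f"both devices:   {relative_risk_reduction(baseline, 0.05):.0%}")
```

On these figures, seat belts alone cut roughly half the baseline fracture risk, while the combination cuts about seven-tenths of it, consistent with the abstract's conclusion that combined restraint use is most protective.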
Atkinson, Jo; Campbell, Ruth; Marshall, Jane; Thacker, Alice; Woll, Bencie
2004-01-01
Simple negation in natural languages represents a complex interrelationship of syntax, prosody, semantics and pragmatics, and may be realised in various ways: lexically, morphologically and prosodically. In almost all spoken languages, the first two of these are the primary realisations of syntactic negation. In contrast, in many signed languages negation can occur without lexical or morphological marking. Thus, in British Sign Language (BSL), negation is obligatorily expressed using face-head actions alone (facial negation) with the option of articulating a manual form alongside the required face-head actions (lexical negation). What are the processes underlying facial negation? Here, we explore this question neuropsychologically. If facial negation reflects lexico-syntactic processing in BSL, it may be relatively spared in people with unilateral right hemisphere (RH) lesions, as has been suggested for other 'grammatical facial actions' [Language and Speech 42 (1999) 307; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence)]. Three BSL users with RH lesions were specifically impaired in perceiving facial compared with manual (lexical and morphological) negation. This dissociation was absent in three users of BSL with left hemisphere lesions and different degrees of language disorder, who also showed relative sparing of negation comprehension. We conclude that, in contrast to some analyses [Applied Psycholinguistics 18 (1997) 411; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence); Archives of Neurology 36 (1979) 837], non-manual negation in sign may not be a direct surface realisation of syntax [Language and Speech 42 (1999) 143; Language and Speech 42 (1999) 127]. Difficulties with facial negation in the RH-lesion group were associated with specific impairments in processing facial images, including facial expressions. 
However, they did not reflect generalised 'face-blindness', since the reading of (English) speech patterns from faces was spared in this group. We propose that some aspects of the linguistic analysis of sign language are achieved by prosodic analysis systems (analysis of face and head gestures), which are lateralised to the minor hemisphere.
Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.
Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S
2007-01-01
People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.
Transformative science education through action research and self-study practices
NASA Astrophysics Data System (ADS)
Calderon, Olga
The research studies human emotions through diverse methods and theoretical lenses. My intention in using this approach is to provide alternative ways of perceiving and interpreting emotions being experienced in the moment of arousal. Emotions are fundamental in human interactions because they are essential in the development of effective relationships of any kind and they can also mediate hostility towards others. I begin by presenting an impressionist auto-ethnography, which narrates a personal account of how science and scientific inquiry have been entrenched in me since childhood. I describe how emotions are an important part of how I perceive and respond to the world around me. I describe science in my life in terms of natural environments, which were the initial source of scientific wonder and bafflement for me. In this auto-ethnography, I recount how social interactions shaped my perceptions about people, the world, and my educational trajectory. Furthermore, I illustrate how sociocultural structures are used in different contexts to mediate several life decisions that enabled me to pursue a career in science and science education. I also reflect on how some of those sociocultural aspects mediated my emotional wellness. I reveal how my life and science are interconnected and I present my story as a segue to the remainder of the dissertation. In chapters 2 and 3, I address a methodology and associated methods for research on facial expression of emotion. I use the facial action coding system developed by Paul Ekman in the 1970s (Ekman, 2002) to study facial representation of emotions. In chapters 4 and 5, I review the history of oximetry and ways in which an oximeter can be used to obtain information on the physiological expression of emotions. I examine oximetry data in relation to emotional physiology in three different aspects: pulse rate, oxygenation of the blood, and plethysmography (i.e., strength of pulse).
In chapters 3 and 5, I include data and observations collected in a science education course for science teachers at Brooklyn College. These observations are only a small part of a larger study of emotions and mindfulness in the science classroom conducted by a group of researchers at the City University of New York. In this context, I explore how, while teaching and learning science, emotions are represented facially and physiologically in terms of oxygenation of the blood and pulse rate and strength.
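As a rough illustration of how pulse rate can be derived from a plethysmograph trace like the one described, the sketch below counts peaks in a synthetic signal. Everything here (the synthetic waveform, the simple peak rule, the function names) is an assumption for demonstration, not the dissertation's method.

```python
import math

def count_peaks(signal):
    """Count local maxima (samples strictly greater than both neighbours)."""
    return sum(
        1
        for i in range(1, len(signal) - 1)
        if signal[i - 1] < signal[i] > signal[i + 1]
    )

def pulse_rate_bpm(signal, sample_rate_hz):
    """Beats per minute, assuming one plethysmograph peak per heartbeat."""
    duration_s = len(signal) / sample_rate_hz
    return count_peaks(signal) * 60.0 / duration_s

# Synthetic 10-second trace: a clean 1.2 Hz sine stands in for the
# pulsatile component of a plethysmograph signal (1.2 Hz = 72 bpm).
fs = 50  # samples per second (assumed)
t = [i / fs for i in range(fs * 10)]
ppg = [math.sin(2 * math.pi * 1.2 * x) for x in t]
print(round(pulse_rate_bpm(ppg, fs)))  # -> 72
```

A real trace would need smoothing and a minimum inter-peak distance before peak counting; this sketch assumes a noise-free signal.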
Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease
Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul
2016-01-01
According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393
Adkinson, Joshua M; Murphy, Robert X
2011-05-01
In 2009, the National Highway Traffic Safety Administration projected that 33,963 people would die and millions would be injured in motor vehicle collisions (MVCs). Multiple studies have evaluated the impact of restraint devices in MVCs. This study examines longitudinal changes in facial fractures after MVCs as a result of the utilization of restraint devices. The Pennsylvania Trauma Systems Foundation-Pennsylvania Trauma Outcomes Study database was queried for MVCs from 1989 to 2009. Restraint device use was noted, and facial fractures were identified by International Classification of Diseases-ninth revision codes. Surgeon cost data were extrapolated. More than 15,000 patients sustained ≥1 facial fracture. Only orbital blowout fractures increased over the 20 years. Patients were 2.1% less likely every year to have ≥1 facial fracture, which translated into decreased estimated surgeon charges. Increased use of protective devices by patients involved in MVCs resulted in a change in the incidence of different facial fractures, with a reduced need for reconstructive surgery.
Facial expressions of emotion and the course of conjugal bereavement.
Bonanno, G A; Keltner, D
1997-02-01
The common assumption that emotional expression mediates the course of bereavement is tested. Competing hypotheses about the direction of mediation were formulated from the grief work and social-functional accounts of emotional expression. Facial expressions of emotion in conjugally bereaved adults were coded at 6 months post-loss as they described their relationship with the deceased; grief and perceived health were measured at 6, 14, and 25 months. Facial expressions of negative emotion, in particular anger, predicted increased grief at 14 months and poorer perceived health through 25 months. Facial expressions of positive emotion predicted decreased grief through 25 months and a positive but nonsignificant relation to perceived health. Predictive relations between negative and positive emotional expression persisted when initial levels of self-reported emotion, grief, and health were statistically controlled, demonstrating the mediating role of facial expressions of emotion in adjustment to conjugal loss. Theoretical and clinical implications are discussed.
Rozin, P; Lowery, L; Imada, S; Haidt, J
1999-04-01
It is proposed that 3 emotions--contempt, anger, and disgust--are typically elicited, across cultures, by violations of 3 moral codes proposed by R. A. Shweder and his colleagues (R. A. Shweder, N. C. Much, M. Mahapatra, & L. Park, 1997). The proposed alignment links anger to autonomy (individual rights violations), contempt to community (violation of communal codes including hierarchy), and disgust to divinity (violations of purity-sanctity). This is the CAD triad hypothesis. Students in the United States and Japan were presented with descriptions of situations that involve 1 of the types of moral violations and asked to assign either an appropriate facial expression (from a set of 6) or an appropriate word (contempt, anger, disgust, or their translations). Results generally supported the CAD triad hypothesis. Results were further confirmed by analysis of facial expressions actually made by Americans to the descriptions of these situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villard, L.; Lossi, A.M.; Fontes, M.
We have previously reported the isolation of a gene from Xq13 that codes for a putative regulator of transcription (XNP) and has now been shown to be the gene involved in the X-linked α-thalassemia with mental retardation (ATR-X) syndrome. The widespread expression and numerous domains present in the putative protein suggest that this gene could be involved in other phenotypes. The predominant expression of the gene in the developing brain, as well as its association with neuron differentiation, indicates that mutations of this gene might result in a mental retardation (MR) phenotype. In this paper we present a family with a splice junction mutation in XNP that results in the skipping of an exon and in the introduction of a stop codon in the middle of the XNP-coding sequence. Only the abnormal transcript is expressed in two first cousins presenting the classic ATR-X phenotype (with α-thalassemia and HbH inclusions). In a distant cousin presenting a similar dysmorphic MR phenotype but not having thalassemia, approximately 30% of the XNP transcripts are normal. These data demonstrate that the mode of action of the XNP gene product on globin expression is distinct from its mode of action in brain development and facial morphogenesis and suggest that other dysmorphic mental retardation phenotypes, such as Juberg-Marsidi or some sporadic cases of Coffin-Lowry, could be due to mutations in XNP. 20 refs., 5 figs., 2 tabs.
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
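The synchrony-detection idea described above can be illustrated with a minimal sketch: windowed, lag-tolerant correlation of two per-frame expression intensity time series (e.g., smile intensities for two interacting people). The helper below is hypothetical and far simpler than whatever IntraFace actually implements.

```python
import numpy as np

def windowed_synchrony(a, b, win=30, max_lag=5):
    """Mean peak cross-correlation of two facial-intensity time series
    over non-overlapping windows (illustrative helper, not IF's algorithm).
    Small lags are tolerated via a circular shift within each window."""
    scores = []
    for start in range(0, len(a) - win, win):
        x = a[start:start + win] - np.mean(a[start:start + win])
        y = b[start:start + win] - np.mean(b[start:start + win])
        denom = np.std(x) * np.std(y) * win
        if denom == 0:
            continue  # flat window: correlation undefined, skip it
        corrs = [np.sum(np.roll(x, lag) * y) / denom
                 for lag in range(-max_lag, max_lag + 1)]
        scores.append(max(corrs))
    return float(np.mean(scores)) if scores else 0.0

# two identical smile-intensity traces score near-perfect synchrony
t = np.linspace(0, 10, 300)
print(windowed_synchrony(np.sin(t), np.sin(t)))  # close to 1.0
```

The circular shift (`np.roll`) is a shortcut for true lagging that keeps each window the same length; a production implementation would handle window edges properly.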
Shih, Yu-Ling; Lin, Chia-Yen
2016-08-01
Action anticipation plays an important role in the successful performance of open skill sports, such as ball and combat sports. Evidence has shown that elite athletes of open sports excel in action anticipation. Most studies have targeted ball sports and agreed that information on body mechanics is one of the key determinants for successful action anticipation in open sports. However, less is known about combat sports, and whether facial emotions have an influence on athletes' action anticipation skill. It has been suggested that the understanding of intention in combat sports relies heavily on emotional context. Based on this suggestion, the present study compared the action anticipation performances of taekwondo athletes, weightlifting athletes, and non-athletes and then correlated these with their performances of emotion recognition. This study primarily found that accurate action anticipation does not necessarily rely on the dynamic information of movement, and that action anticipation performance is correlated with that of emotion recognition in taekwondo athletes, but not in weightlifting athletes. Our results suggest that the recognition of facial emotions plays a role in the action prediction in such combat sports as taekwondo.
NASA Technical Reports Server (NTRS)
2002-01-01
Goddard Space Flight Center and Triangle Research & Development Corporation collaborated to create "Smart Eyes," a charge coupled device camera that, for the first time, could read and measure bar codes without the use of lasers. The camera operated in conjunction with software and algorithms created by Goddard and Triangle R&D that could track bar code position and direction with speed and precision, as well as with software that could control robotic actions based on vision system input. This accomplishment was intended for robotic assembly of the International Space Station, helping NASA to increase production while using less manpower. After successfully completing the two-phase SBIR project with Goddard, Triangle R&D was awarded a separate contract from the U.S. Department of Transportation (DOT), which was interested in using the newly developed NASA camera technology to heighten automotive safety standards. In 1990, Triangle R&D and the DOT developed a mask made from a synthetic, plastic skin covering to measure facial lacerations resulting from automobile accidents. By pairing NASA's camera technology with Triangle R&D's and the DOT's newly developed mask, a system that could provide repeatable, computerized evaluations of laceration injury was born.
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the under-layer classification at the sub-phonemic level has been modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classifications, and the synthesized facial sequences reach a comparatively convincing quality.
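The final morphing step, generating in-between frames between facial action unit (FAU) targets, can be sketched as linear interpolation of AU intensity vectors. The AU dimensions and target values below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Hypothetical FAU target poses: per-AU activation in [0, 1].
NEUTRAL = np.zeros(5)
SMILE = np.array([0.0, 0.9, 0.0, 0.7, 0.2])  # e.g. an AU6/AU12-heavy pose

def morph(src, dst, n_frames):
    """Linearly interpolate between two AU poses, one vector per frame."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * src + t * dst for t in ts]

frames = morph(NEUTRAL, SMILE, 5)
print(frames[0], frames[-1])  # endpoints equal the source and target poses
```

A real system would drive a face mesh from each interpolated AU vector and use easing curves rather than strictly linear blending.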
Do facial movements express emotions or communicate motives?
Parkinson, Brian
2005-01-01
This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.
The effect of facial expressions on peripersonal and interpersonal spaces.
Ruggiero, Gennaro; Frassinetti, Francesca; Coello, Yann; Rapuano, Mariachiara; di Cola, Armando Schiano; Iachini, Tina
2017-11-01
Identifying individuals' intent through the emotional valence conveyed by their facial expression influences our capacity to approach-avoid these individuals during social interactions. Here, we explore if and how the emotional valence of others' facial expressiveness modulates peripersonal-action and interpersonal-social spaces. Through Immersive Virtual Reality, participants determined reachability-distance (for peripersonal space) and comfort-distance (for interpersonal space) from male/female virtual confederates exhibiting happy, angry and neutral facial expressions while being approached by (passive-approach) or walking toward (active-approach) them. Results showed an increase of distance when seeing angry rather than happy confederates in both approach conditions of comfort-distance. The effect also appeared in reachability-distance, but only in the passive-approach. Anger prompts avoidant behaviors, and thus an expansion of distance, particularly with a potential violation of near body space by an intruder. Overall, the findings suggest that peripersonal-action space, in comparison with interpersonal-social space, is similarly sensitive to the emotional valence of stimuli. We propose that this similarity could reflect a common adaptive mechanism shared by these spaces, presumably at different degrees, for ensuring self-protection functions.
Bad to the bone: facial structure predicts unethical behaviour.
Haselhuhn, Michael P; Wong, Elaine M
2012-02-07
Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour.
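The facial width-to-height ratio itself is a simple landmark computation: bizygomatic width divided by the vertical distance from brow to upper lip. A minimal sketch with illustrative (x, y) pixel coordinates (the landmark names are standard anthropometric points; the study does not publish code):

```python
def facial_width_to_height(left_zygion, right_zygion, brow, upper_lip):
    """fWHR from four (x, y) landmarks: bizygomatic width over
    upper-face height (brow to upper lip)."""
    width = abs(right_zygion[0] - left_zygion[0])
    height = abs(upper_lip[1] - brow[1])
    return width / height

# a wider face with the same upper-face height yields a larger ratio
print(facial_width_to_height((0, 50), (140, 50), (70, 20), (70, 95)))  # 140 / 75 ≈ 1.87
```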
Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S
2017-05-01
Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution for 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting it to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using the landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods achieved satisfactory results. Possible real-world applications using our algorithms are also discussed.
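Classifying AU-space coordinates into expressions can be sketched with a nearest-centroid rule. The 5-dimensional AU vectors and the two-class setup below are toy assumptions for illustration; the paper's actual feature extraction and classifier are more elaborate.

```python
import numpy as np

# Hypothetical training data: rows are faces represented as coordinates
# in a 5-dimensional AU space.
train_X = np.array([[0.9, 0.8, 0.0, 0.1, 0.0],   # happy-like AU pattern
                    [0.8, 0.9, 0.1, 0.0, 0.1],
                    [0.0, 0.1, 0.9, 0.8, 0.7],   # anger-like AU pattern
                    [0.1, 0.0, 0.8, 0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])                  # 0 = happy, 1 = angry

def nearest_centroid_predict(x):
    """Classify an AU-space coordinate by its nearest class centroid."""
    centroids = [train_X[train_y == c].mean(axis=0) for c in (0, 1)]
    dists = [np.linalg.norm(x - c) for c in centroids]
    return int(np.argmin(dists))

print(nearest_centroid_predict(np.array([0.85, 0.85, 0.05, 0.05, 0.05])))  # → 0
```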
Facial color is an efficient mechanism to visually transmit emotion
Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash
2018-01-01
Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780
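Color features of the kind studied above can be sketched as per-region mean colors concatenated into a feature vector, which a classifier could then map to emotion categories. The region boxes and the toy image below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def region_color_features(face, regions):
    """Concatenate the mean RGB color of each facial region
    (a simplified stand-in for the paper's color features)."""
    feats = []
    for (r0, r1, c0, c1) in regions:
        feats.extend(face[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0))
    return np.array(feats)

# toy 4x4 "face": a reddish upper half and a greenish lower half
face = np.zeros((4, 4, 3))
face[:2] = [200, 60, 60]
face[2:] = [60, 180, 60]
regions = [(0, 2, 0, 4), (2, 4, 0, 4)]  # forehead-like and mouth-like boxes
print(region_color_features(face, regions))  # reddish then greenish region means
```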
Lew, Timothy A; Walker, John A; Wenke, Joseph C; Blackbourne, Lorne H; Hale, Robert G
2010-01-01
To characterize and describe the craniomaxillofacial (CMF) battlefield injuries sustained by US Service Members in Operation Iraqi Freedom and Operation Enduring Freedom. The Joint Theater Trauma Registry was queried from October 19, 2001, to December 11, 2007, for CMF battlefield injuries. The CMF injuries were identified using "International Classification of Diseases, Ninth Revision, Clinical Modification" codes, and the data were compiled for battlefield-injured service members. Nonbattlefield injuries, killed-in-action cases, and return-to-duty cases were excluded. CMF battlefield injuries were found in 2,014 of the 7,770 battlefield-injured US service members. These 2,014 injured service members sustained 4,783 CMF injuries (2.4 injuries per soldier). The incidence of CMF battlefield injuries by branch of service was Army, 72%; Marines, 24%; Navy, 2%; and Air Force, 1%. The incidence of penetrating soft-tissue injuries and fractures was 58% and 27%, respectively. Of the fractures, 76% were open. The location of the facial fractures was the mandible in 36%, maxilla/zygoma in 19%, nasal in 14%, and orbit in 11%; the remaining 20% were not otherwise specified. The primary mechanism of injury involved explosive devices (84%). Of the injured US service members, 26% had injuries to the CMF region in the Operation Iraqi Freedom/Operation Enduring Freedom conflicts during a 6-year period. Multiple penetrating soft-tissue injuries and fractures caused by explosive devices were frequently seen. Increased survivability because of body armor, advanced battlefield medicine, and the increased use of explosive devices is probably related to the elevated incidence of CMF battlefield injuries. The current use of "International Classification of Diseases, Ninth Revision, Clinical Modification" codes with the Joint Theater Trauma Registry failed to characterize the severity of facial wounds.
Differential patterns of implicit emotional processing in Alzheimer's disease and healthy aging.
García-Rodríguez, Beatriz; Fusari, Anna; Rodríguez, Beatriz; Hernández, José Martín Zurdo; Ellgring, Heiner
2009-01-01
Implicit memory for emotional facial expressions (EFEs) was investigated in young adults, healthy old adults, and mild Alzheimer's disease (AD) patients. Implicit memory is revealed by the effect of experience on performance by studying previously encoded versus novel stimuli, a phenomenon referred to as perceptual priming. The aim was to assess the changes in the patterns of priming as a function of aging and dementia. Participants identified EFEs taken from the Facial Action Coding System and the stimuli used represented the emotions of happiness, sadness, surprise, fear, anger, and disgust. In the study phase, participants rated the pleasantness of 36 faces using a Likert-type scale. Subsequently, the response to the 36 previously studied and 36 novel EFEs was tested when they were randomly presented in a cued naming task. The results showed that implicit memory for EFEs is preserved in AD and aging, and no specific age-related effects on implicit memory for EFEs were observed. However, different priming patterns were evident in AD patients that may reflect pathological brain damage and the effect of stimulus complexity. These findings provide evidence of how progressive neuropathological changes in the temporal and frontal areas may affect emotional processing in more advanced stages of the disease.
Humor, laughter, and the cerebellum: insights from patients with acute cerebellar stroke.
Frank, B; Andrzejewski, K; Göricke, S; Wondzinski, E; Siebler, M; Wild, B; Timmann, D
2013-12-01
Extent of cerebellar involvement in cognition and emotion is still a topic of ongoing research. In particular, the cerebellar role in humor processing and control of laughter is not well known. A hypermetric dysregulation of affective behavior has been assumed in cerebellar damage. Thus, we aimed at investigating humor comprehension and appreciation as well as the expression of laughter in 21 patients in the acute or subacute state after stroke restricted to the cerebellum, and in the same number of matched healthy control subjects. Patients with acute and subacute cerebellar damage showed preserved comprehension and appreciation of humor using a validated humor test evaluating comprehension, funniness and aversiveness of cartoons ("3WD Humor Test"). Additionally, there was no difference when compared to healthy controls in the number and intensity of facial reactions and laughter while observing jokes, humorous cartoons, or video sketches measured by the Facial Action Coding System. However, as depression scores were significantly increased in patients with cerebellar stroke, a concealing effect of accompanying depression cannot be excluded. Current findings add to descriptions in the literature that cognitive or affective disorders in patients with lesions restricted to the cerebellum, even in the acute state after damage, are frequently mild and might only be present in more sensitive or specific tests.
Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal
2015-03-01
The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot pack, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. Classification of patients according to the House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were significantly improved in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1.
The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks), improved functional facial movements and electrophysiologic outcome measures at the 3-mo follow-up in patients with Bell palsy. Further research focused on determining the most effective dosage and length of intervention with electrical stimulation is warranted.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.
2017-06-01
The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated using a Raspberry Pi. Results are stored in a database for easy and quick access. The databases used are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.
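The sensing-and-storage loop described can be sketched with Python's built-in sqlite3. The sensor-read functions below are hypothetical stand-ins for the Raspberry Pi interfaces, which the abstract does not specify, and an in-memory database stands in for the cloud-synced one.

```python
import sqlite3
import random
import time

# Hypothetical stand-ins for the Raspberry Pi sensor reads.
def read_heart_rate():
    return random.uniform(60, 100)       # bpm

def read_skin_conductance():
    return random.uniform(2, 20)         # microsiemens

conn = sqlite3.connect(":memory:")       # a file or cloud-synced DB in practice
conn.execute("""CREATE TABLE readings (
                  ts REAL, heart_rate REAL, skin_conductance REAL)""")

for _ in range(5):                       # one row per sampling tick
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (time.time(), read_heart_rate(), read_skin_conductance()))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 5
```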
The Impact of Experience on Affective Responses during Action Observation.
Kirsch, Louise P; Snagg, Arielle; Heerey, Erin; Cross, Emily S
2016-01-01
Perceiving others in action elicits affective and aesthetic responses in observers. The present study investigates the extent to which these responses relate to an observer's general experience with observed movements. Facial electromyographic (EMG) responses were recorded in experienced dancers and non-dancers as they watched short videos of movements performed by professional ballet dancers. Responses were recorded from the corrugator supercilii (CS) and zygomaticus major (ZM) muscles, both of which show engagement during the observation of affect-evoking stimuli. In the first part of the experiment, participants passively watched the videos while EMG data were recorded. In the second part, they explicitly rated how much they liked each movement. Results revealed a relationship between explicit affective judgments of the movements and facial muscle activation only among those participants who were experienced with the movements. Specifically, CS activity was higher for disliked movements and ZM activity was higher for liked movements among dancers but not among non-dancers. The relationship between explicit liking ratings and EMG data in experienced observers suggests that facial muscles subtly echo affective judgments even when viewing actions that are not intentionally emotional in nature, thus underscoring the potential of EMG as a method to examine subtle shifts in implicit affective responses during action observation.
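The reported CS/ZM pattern amounts to a grouped comparison of muscle amplitudes by liking rating. The toy values below are invented solely to illustrate the shape of that analysis, not the study's data.

```python
import numpy as np

# Toy trials: (liking rating 1-7, corrugator (CS) amplitude, zygomaticus (ZM) amplitude).
trials = [(6, 0.8, 2.1), (7, 0.6, 2.4), (2, 2.0, 0.5), (1, 2.3, 0.4)]

liked = [(cs, zm) for r, cs, zm in trials if r >= 4]
disliked = [(cs, zm) for r, cs, zm in trials if r < 4]

cs_liked = np.mean([cs for cs, _ in liked])
cs_disliked = np.mean([cs for cs, _ in disliked])
zm_liked = np.mean([zm for _, zm in liked])
zm_disliked = np.mean([zm for _, zm in disliked])

# the paper's pattern: CS higher for disliked, ZM higher for liked movements
print(cs_disliked > cs_liked, zm_liked > zm_disliked)  # True True
```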
Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.
Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J
2011-04-01
We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Epidemiology and resource utilization in pediatric facial fractures.
Soleimani, Tahereh; Greathouse, Shawn Travis; Sood, Rajiv; Tahiri, Youssef H; Tholpady, Sunil S
2016-02-01
Pediatric facial fractures, although uncommon, have a significant impact on public health and the US economy owing to coexisting injuries and developmental deformities. Violence is one of the most frequent mechanisms leading to facial fracture. Teaching hospitals, while educating future medical professionals, have been linked to greater resource utilization in differing scenarios. This study was designed to compare the differences in patient characteristics and outcomes between teaching and non-teaching hospitals for violence-related pediatric facial fractures. Using the 2000-2009 Kids' Inpatient Database, 3881 patients younger than 18 years were identified with a facial fracture and an external cause of injury code for assault, fight, or abuse. Patients admitted to teaching hospitals were compared with those admitted to non-teaching hospitals in terms of demographics, injuries, and outcomes. Overall, 76.2% of patients had been treated at teaching hospitals. Compared to those treated at non-teaching hospitals, these patients were more likely to be younger, non-white, covered by Medicaid, from lower-income zip codes, and to have thoracic injuries; mortality rates, however, did not differ significantly. After adjusting for potential confounders, the teaching status of the hospital did not predict longer lengths of stay (LOS) or higher charges. There was no significant difference in LOS or charges between teaching and non-teaching hospitals after controlling for patient demographics. This suggests that the longer LOS observed at teaching hospitals is related to these institutions being more often involved in the care of underserved populations and patients with more severe injuries. Copyright © 2016 Elsevier Inc. All rights reserved.
Helping the police with their inquiries
NASA Astrophysics Data System (ADS)
Kitson, Anthony J.
1995-09-01
The UK Home Office has held a long-term interest in facial recognition. Work has concentrated upon providing the UK police with facilities to improve the use that can be made of the memory of victims and witnesses rather than automatically matching images. During the 1970s a psychological coding scheme and a search method were developed by Aberdeen University and the Home Office. This has been incorporated into systems for searching prisoner photographs both experimentally and operationally. The coding scheme has also been incorporated in a facial likeness composition system. The Home Office is currently implementing a national criminal record system (Phoenix) and work has been conducted to define and demonstrate standards for image-enabled terminals for this application. Users have been consulted to establish suitable picture quality for the purpose, and a study of compression methods is in hand. Recently there has been increased use made by UK courts of expert testimony based upon the measurement of facial images. We are currently working with a group of practitioners to examine and improve the quality of such evidence and to develop a national standard.
Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute
2016-01-01
Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients (in spite of a reduced mimic reaction), we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
Deliberately generated and imitated facial expressions of emotions in people with eating disorders.
Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate
2016-02-01
People with eating disorders have difficulties in socio-emotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate compared to HC when imitating facial expressions, whilst BN participants had a middle-range performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The study findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.
When action meets emotions: how facial displays of emotion influence goal-related behavior.
Ferri, Francesca; Stoianov, Ivilin Peev; Gianelli, Claudia; D'Amico, Luigi; Borghi, Anna M; Gallese, Vittorio
2010-10-01
Many authors have proposed that facial expressions, by conveying the emotional states of the person we are interacting with, influence interaction behavior. We aimed to verify how specifically the facial expressions of emotion of an individual (both their valence and their relevance/specificity for the purpose of the action) affect how an action aimed at that individual is executed. In addition, we investigated whether and how the effects of emotions on action execution are modulated by participants' empathic attitudes. We used a kinematic approach to analyze the simulation of feeding others, which consisted of recording the "feeding trajectory" by using a computer mouse. Actors could express different highly arousing emotions, namely happiness, disgust, anger, or a neutral expression. Response time was sensitive to the interaction between valence and relevance/specificity of emotion: disgust elicited faster responses. In addition, happiness induced slower feeding time and longer time to peak velocity, but only in blocks where it alternated with expressions of disgust. The kinematic profiles described how the effect of the specificity of the emotional context for feeding, namely a modulation of accuracy requirements, occurs. An early acceleration in kinematic relative-to-neutral feeding profiles occurred when actors expressed positive emotions (happiness) in blocks with specific-to-feeding negative emotions (disgust). On the other hand, the end-part of the action was slower when feeding happy with respect to neutral faces, confirming the increase of accuracy requirements and motor control. These kinematic effects were modulated by participants' empathic attitudes. In conclusion, the social dimension of emotions, that is, their ability to modulate others' action planning/execution, strictly depends on their relevance and specificity to the purpose of the action. This finding argues against a strict distinction between social and nonsocial emotions.
Facial nerve palsy: analysis of cases reported in children in a suburban hospital in Nigeria.
Folayan, M O; Arobieke, R I; Eziyi, E; Oyetola, E O; Elusiyan, J
2014-01-01
The study describes the epidemiology, treatment, and treatment outcomes of the 10 cases of facial nerve palsy seen in children managed at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife over a 10-year period. It also compares findings with reports from developed countries. This was a retrospective cohort review of pediatric cases of facial nerve palsy encountered in all the clinics run by specialists in the above-named hospital. A diagnosis of facial palsy was based on International Classification of Diseases, Ninth Revision, Clinical Modification codes. Information retrieved from the case notes included sex, age, number of days with the lesion prior to presentation in the clinic, diagnosis, treatment, treatment outcome, and referral clinic. Only 10 cases of facial nerve palsy were diagnosed in the institution during the study period. The prevalence of facial nerve palsy in this hospital was 0.01%. The lesion more commonly affected males and the right side of the face. All cases were associated with infections, mainly mumps (70% of cases). Case management included the use of steroids and eye pads for cases that presented within 7 days; and steroids, eye pads, and physical therapy for cases that presented later. All cases of facial nerve palsy associated with mumps and malaria infection fully recovered. The two cases of facial nerve palsy associated with otitis media only partially recovered. Facial nerve palsy in pediatric patients is more commonly associated with mumps in the study environment. Success was recorded with steroid therapy.
Umekawa, Motoyuki; Hatano, Keiko; Matsumoto, Hideyuki; Shimizu, Takahiro; Hashida, Hideji
2017-05-27
The patient was a 47-year-old man who presented with diplopia and gait instability with a gradual onset over the course of three days. Neurological examinations showed ophthalmoplegia, diminished tendon reflexes, and truncal ataxia. Tests for anti-GQ1b antibodies and several other antibodies to ganglioside complex were positive. We made a diagnosis of Fisher syndrome. After administration of intravenous immunoglobulin, the patient's symptoms gradually improved. However, bilateral facial palsy appeared during the recovery phase. Brain MRI showed intensive contrast enhancement of bilateral facial nerves. During the onset phase of facial palsy, the amplitude of the compound muscle action potential (CMAP) in the facial nerves was preserved. During the peak phase, the facial CMAP amplitude was within the lower limit of normal values, or mildly decreased. During the recovery phase, the CMAP amplitude was normalized, and the R1 and R2 responses of the blink reflex were prolonged. The delayed facial nerve palsy improved spontaneously, and the enhancement on brain MRI disappeared. Serial neurophysiological and neuroradiological examinations suggested that the main lesions existed in the proximal part of the facial nerves and the mild lesions existed in the facial nerve terminals, probably due to reversible conduction failure.
Selective Transfer Machine for Personalized Facial Action Unit Detection
Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffery F.
2014-01-01
Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all three. PMID:25242877
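The core idea of this kind of personalization, re-weighting training samples toward the test subject's distribution while fitting a classifier, can be sketched in a few lines. The sketch below is a loose illustration, not the paper's actual optimization: it replaces STM's joint objective with a simple RBF-similarity weighting followed by a weighted logistic regression, and all function names and parameters are illustrative.

```python
import numpy as np

def reweight_by_similarity(X_train, X_test, bandwidth=1.0):
    """Weight each training sample by its average RBF similarity to the
    test subject's (unlabeled) samples -- a crude stand-in for STM's
    distribution-matching term."""
    d2 = ((X_train[:, None, :] - X_test[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return w / w.sum() * len(w)  # normalize so the mean weight is 1

def weighted_logistic_regression(X, y, w, lr=0.1, steps=500):
    """Fit a logistic regression whose loss is weighted per-sample by w,
    using plain gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))   # predicted probabilities
        grad = Xb.T @ (w * (p - y)) / len(y)    # weighted gradient
        theta -= lr * grad
    return theta
```

In this toy form, training samples that resemble the unlabeled test subject dominate the fit, which is the intuition behind attenuating person-specific biases without extra labels.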
Action Unit Models of Facial Expression of Emotion in the Presence of Speech
Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
Bad to the bone: facial structure predicts unethical behaviour
Haselhuhn, Michael P.; Wong, Elaine M.
2012-01-01
Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour. PMID:21733897
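The facial width-to-height ratio itself is a simple geometric measure: bizygomatic (cheekbone-to-cheekbone) width divided by upper-face height. A minimal sketch, assuming 2D landmark coordinates are already available; the landmark names are illustrative, not taken from the study:

```python
import numpy as np

def facial_width_to_height_ratio(left_zygion, right_zygion, brow, upper_lip):
    """Compute fWHR from 2D landmarks: bizygomatic width divided by
    upper-face height (vertical distance from brow to upper lip)."""
    width = np.linalg.norm(np.subtract(right_zygion, left_zygion))
    height = abs(brow[1] - upper_lip[1])  # use only the vertical component
    return width / height
```

Because the measure is a ratio of distances within the same image, it is invariant to image scale, which is why it can be compared across photographs.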
Responsibility and the sense of agency enhance empathy for pain
Lepron, Evelyne; Causse, Michaël; Farrer, Chlöé
2015-01-01
Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reaction to others' emotions. We conducted two experiments in order to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural (pain intensity estimation from facial expressions and unpleasantness for the observer ratings) and electrophysiological measures (facial electromyography). Experiment 1 showed enhanced empathic response (increased unpleasantness for the observer and facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness rating also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct. PMID:25473014
Infant Expressions in an Approach/Withdrawal Framework
Sullivan, Margaret Wolan
2014-01-01
Since the introduction of empirical methods for studying facial expression, the interpretation of infant facial expressions has generated much debate. The premise of this paper is that action tendencies of approach and withdrawal constitute a core organizational feature of emotion in humans, promoting coherence of behavior, facial signaling and physiological responses. The approach/withdrawal framework can provide a taxonomy of contexts and the neurobehavioral framework for the systematic, empirical study of individual differences in expression, physiology, and behavior within individuals as well as across contexts over time. By adopting this framework in developmental work on basic emotion processes, it may be possible to better understand the behavioral principles governing facial displays, and how individual differences in them relate to physiology, behavior, and function in context. PMID:25412273
Facial expressions of emotion and psychopathology in adolescent boys.
Keltner, D; Moffitt, T E; Stouthamer-Loeber, M
1995-11-01
On the basis of the widespread belief that emotions underpin psychological adjustment, the authors tested 3 predicted relations between externalizing problems and anger, internalizing problems and fear and sadness, and the absence of externalizing problems and social-moral emotion (embarrassment). Seventy adolescent boys were classified into 1 of 4 comparison groups on the basis of teacher reports using a behavior problem checklist: internalizers, externalizers, mixed (both internalizers and externalizers), and nondisordered boys. The authors coded the facial expressions of emotion shown by the boys during a structured social interaction. Results supported the 3 hypotheses: (a) Externalizing adolescents showed increased facial expressions of anger, (b) on 1 measure internalizing adolescents showed increased facial expressions of fear, and (c) the absence of externalizing problems (or nondisordered classification) was related to increased displays of embarrassment. Discussion focused on the relations of these findings to hypotheses concerning the role of impulse control in antisocial behavior.
Soccer-Related Facial Trauma: A Nationwide Perspective.
Bobian, Michael R; Hanba, Curtis J; Svider, Peter F; Hojjat, Houmehr; Folbe, Adam J; Eloy, Jean Anderson; Shkoukani, Mahdi A
2016-12-01
Soccer participation continues to increase among all ages in the US. Our objective was to analyze trends in soccer-related facial injury epidemiology, demographics, and mechanisms of injury. The National Electronic Injury Surveillance System was evaluated for soccer-related facial injuries from 2010 through 2014. Results for product code "soccer" were filtered for injuries to the face. The number of injuries was extrapolated, and data were analyzed for age, sex, specific injury diagnoses, locations, and mechanisms. In all, 2054 soccer-related facial trauma entries were analyzed. During this time, the number of injuries remained relatively stable. Lacerations were the most common diagnosis (44.2%), followed by contusions and fractures. The most common site of fracture was the nose (75.1%). Of fractures with a reported mechanism of injury, the most common was head-to-head collisions (39.0%). Patients <19 years accounted for 66.9% of injuries, and athletes over 18 years old had a higher risk of fractures. The incidence of soccer-related facial trauma has remained stable, but the severity of such injuries remains a danger. Facial protection in soccer is virtually absent, and our findings reinforce the need to educate athletes, families, and physicians on injury awareness and prevention. © The Author(s) 2016.
Téllez, Maria J; Ulkatan, Sedat; Urriza, Javier; Arranz-Arranz, Beatriz; Deletis, Vedran
2016-02-01
To improve the recognition and possibly prevent confounding peripheral activation of the facial nerve caused by leaking transcranial electrical stimulation (TES) current during corticobulbar tract monitoring. We applied a single stimulus and a short train of electrical stimuli directly to the extracranial portion of the facial nerve. We compared the peripherally elicited compound muscle action potential (CMAP) of the facial nerve with the responses elicited by TES during intraoperative monitoring of the corticobulbar tract. A single stimulus applied directly to the facial nerve at subthreshold intensities did not evoke a CMAP, whereas short trains of subthreshold stimuli repeatedly evoked CMAPs. This is due to the phenomenon of sub- or near-threshold superexcitability of the cranial nerve. Therefore, the facial responses evoked by short-train TES, when the leaked current reaches the facial nerve at sub- or near-threshold intensity, could lead to false interpretation. Our results revealed a potential pitfall in the current methodology for facial corticobulbar tract monitoring that is due to the activation of the facial nerve by subthreshold trains of stimuli. This study proposes a new criterion to exclude peripheral activation during corticobulbar tract monitoring. The failure to recognize and avoid facial nerve activation due to leaking current in the peripheral portion of the facial nerve during TES decreases the reliability of corticobulbar tract monitoring by increasing the possibility of false interpretation. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
The contemptuous separation: Facial expressions of emotion and breakups in young adulthood
Heshmati, Saeideh; Sbarra, David A.; Mason, Ashley E.
2017-01-01
The importance of studying specific and expressed emotions after a stressful life event is well known, yet few studies have moved beyond assessing self-reported emotional responses to a romantic breakup. This study examined associations between computer-recognized facial expressions and self-reported breakup-related distress among recently separated college-aged young adults (N = 135; 37 men) on four visits across 9 weeks. Participants’ facial expressions were coded using the Computer Expression Recognition Toolbox while participants spoke about their breakups. Of the seven expressed emotions studied, only Contempt showed a unique association with breakup-related distress over time. At baseline, greater Contempt was associated with less breakup-related distress; however, over time, greater Contempt was associated with greater breakup-related distress. PMID:29249896
Historical Techniques of Lie Detection
Vicianova, Martina
2015-01-01
Since time immemorial, lying has been a part of everyday life. For this reason, it has become a subject of interest in several disciplines, including psychology. The purpose of this article is to provide a general overview of the literature and thinking to date about the evolution of lie detection techniques. The first part explores ancient methods recorded circa 1000 B.C. (e.g., God’s judgment in Europe). The second part describes technical methods based on sciences such as phrenology, polygraph and graphology. This is followed by an outline of more modern-day approaches such as FACS (Facial Action Coding System), functional MRI, and Brain Fingerprinting. Finally, after the familiarization with the historical development of techniques for lie detection, we discuss the scope for new initiatives not only in the area of designing new methods, but also for the research into lie detection itself, such as its motives and regulatory issues related to deception. PMID:27247675
Wang, Yamin; Zhou, Lu
2016-10-01
Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
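The building block the abstract relies on, a kernel between time-series of motion patterns that tolerates temporal misalignment, can be illustrated with a DTW-based similarity. This is a simplified stand-in, not the paper's method: a Gaussian map of the dynamic time warping distance is not guaranteed positive definite (global-alignment kernels address that), and the function names are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D time series --
    a simple way to compare motion patterns despite temporal misalignment."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def ts_kernel(a, b, gamma=0.1):
    """Turn the DTW distance into a similarity score in (0, 1] via a
    Gaussian map; identical series map to 1."""
    return np.exp(-gamma * dtw_distance(a, b))
```

With such a kernel in hand, standard kernelized sparse-coding machinery can operate on raw motion patterns directly, which is the point the abstract makes about avoiding expensive image features.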
Marinelli, John P; Van Gompel, Jamie J; Link, Michael J; Carlson, Matthew L
2018-05-01
Secondary trigeminal neuralgia (TN) is uncommon. When a space-occupying lesion with mass effect is identified, the associated TN is often exclusively attributed to the tumor. This report illustrates the importance of considering coexistent actionable pathology when surgically treating secondary TN. A 51-year-old woman presented with abrupt-onset TN of the V2 and V3 nerve divisions with hypesthesia. She denied changes in hearing, balance, or facial nerve dysfunction. Magnetic resonance imaging revealed a 1.6-cm contrast-enhancing cerebellopontine angle tumor that effaced the trigeminal nerve, consistent with a vestibular schwannoma. In addition, a branch of the superior cerebellar artery abutted the cisternal segment of the trigeminal nerve on T2-weighted thin-slice magnetic resonance imaging. Intraoperative electrical stimulation of the tumor elicited a response from the facial nerve at low threshold over the entire accessible tumor surface, indicating that the tumor was a facial nerve schwannoma. Considering the patient's lack of facial nerve deficit and that the tumor exhibited no safe entry point for intracapsular debulking, tumor resection was not performed. Working between the tumor and tentorium, a branch of the superior cerebellar artery was identified and decompressed with a Teflon pad. At last follow-up, the patient exhibited resolution of her TN. Her hearing and facial nerve function remained intact. Despite obstruction from a medium-sized tumor, it is still possible to achieve microvascular decompression of the fifth cranial nerve. This emphasizes the importance of considering other actionable pathology during surgical management of presumed tumor-induced TN. Further, TN is relatively uncommon with medium-sized vestibular schwannomas and coexistent causes should be considered. Copyright © 2018 Elsevier Inc. All rights reserved.
The localization of facial motor impairment in sporadic Möbius syndrome.
Cattaneo, L; Chierici, E; Bianchi, B; Sesenna, E; Pavesi, G
2006-06-27
To investigate the neurophysiologic aspects of facial motor control in patients with sporadic Möbius syndrome defined as nonprogressive congenital facial and abducens palsy. The authors assessed 24 patients with sporadic Möbius syndrome by performing a complete clinical examination and neurophysiologic tests including facial nerve conduction studies, needle electromyography examination of facial muscles, and recording of the blink reflex and of the trigeminofacial inhibitory reflex. Two distinct groups of patients were identified according to neurophysiologic testing. The first group was characterized by increased facial distal motor latencies (DMLs) and poor recruitment of small and polyphasic motor unit action potentials (MUAPs). The second group was characterized by normal facial DMLs and neuropathic MUAPs. It is hypothesized that in the first group, the disorder is due to a rhombencephalic maldevelopment with selective sparing of small-size MUs, and in the second group, the disorder is related to an acquired nervous injury during intrauterine life, with subsequent neurogenic remodeling of MUs. The trigeminofacial reflexes showed that in most subjects of both groups, the functional impairment of facial movements was caused by a nuclear or peripheral site of lesion, with little evidence of brainstem interneuronal involvement. Two different neurophysiologically defined phenotypes can be distinguished in sporadic Möbius syndrome, with different pathogenetic implications.
Imitating expressions: emotion-specific neural substrates in facial mimicry.
Lee, Tien-Wen; Josephs, Oliver; Dolan, Raymond J; Critchley, Hugo D
2006-09-01
Intentionally adopting a discrete emotional facial expression can modulate the subjective feelings corresponding to that emotion; however, the underlying neural mechanism is poorly understood. We therefore used functional brain imaging (functional magnetic resonance imaging) to examine brain activity during intentional mimicry of emotional and non-emotional facial expressions and relate regional responses to the magnitude of expression-induced facial movement. Eighteen healthy subjects were scanned while imitating video clips depicting three emotional (sad, angry, happy) and two 'ingestive' (chewing and licking) facial expressions. Simultaneously, facial movement was monitored from displacement of fiducial markers (highly reflective dots) on each subject's face. Imitating emotional expressions enhanced activity within right inferior prefrontal cortex. This pattern was absent during passive viewing conditions. Moreover, the magnitude of facial movement during emotion-imitation predicted responses within right insula and motor/premotor cortices. Enhanced activity in ventromedial prefrontal cortex and frontal pole was observed during imitation of anger, in ventromedial prefrontal and rostral anterior cingulate cortices during imitation of sadness, and in striatal, amygdala, and occipitotemporal regions during imitation of happiness. Our findings suggest a central role for right inferior frontal gyrus in the intentional imitation of emotional expressions. Further, by entering metrics for facial muscular change into analysis of brain imaging data, we highlight shared and discrete neural substrates supporting affective, action and social consequences of somatomotor emotional expression.
Psychocentricity and participant profiles: implications for lexical processing among multilinguals
Libben, Gary; Curtiss, Kaitlin; Weber, Silke
2014-01-01
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique, that we term Facial Profiles. This technique is based on Chernoff faces developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically-related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
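The bilinear identity-by-expression model described above can be sketched with mode products on a core tensor. The shapes and the random core below are placeholders for the HOSVD-reduced FaceWarehouse data; only the contraction pattern is the point.

```python
import numpy as np

# Illustrative bilinear face model: vertices x identity-basis x expression-basis
n_verts, n_id, n_expr = 9, 4, 3
rng = np.random.default_rng(1)
core = rng.standard_normal((n_verts, n_id, n_expr))  # stand-in for the HOSVD core

w_id = np.array([0.7, 0.3, 0.0, 0.0])  # identity weights (who the face is)
w_expr = np.array([0.0, 1.0, 0.0])     # expression weights (what it is doing)

# A face is the core tensor contracted with both weight vectors:
# V = C x_2 w_id x_3 w_expr  (mode-2 and mode-3 products)
face = np.einsum("vie,i,e->v", core, w_id, w_expr)
```

Fitting the model to a new image then amounts to optimizing w_id and w_expr (and camera parameters) so the generated vertices match observed landmarks.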
Voluntary facial action generates emotion-specific autonomic nervous system activity.
Levenson, R W; Ekman, P; Friesen, W V
1990-07-01
Four experiments were conducted to determine whether voluntarily produced emotional facial configurations are associated with differentiated patterns of autonomic activity, and if so, how this might be mediated. Subjects received muscle-by-muscle instructions and coaching to produce facial configurations for anger, disgust, fear, happiness, sadness, and surprise while heart rate, skin conductance, finger temperature, and somatic activity were monitored. Results indicated that voluntary facial activity produced significant levels of subjective experience of the associated emotion, and that autonomic distinctions among emotions: (a) were found both between negative and positive emotions and among negative emotions, (b) were consistent between group and individual subjects' data, (c) were found in both male and female subjects, (d) were found in both specialized (actors, scientists) and nonspecialized populations, (e) were stronger when the voluntary facial configurations most closely resembled actual emotional expressions, and (f) were stronger when experience of the associated emotion was reported. The capacity of voluntary facial activity to generate emotion-specific autonomic activity: (a) did not require subjects to see facial expressions (either in a mirror or on an experimenter's face), and (b) could not be explained by differences in the difficulty of making the expressions or by differences in concomitant somatic activity.
Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition
NASA Astrophysics Data System (ADS)
Buciu, Ioan; Pitas, Ioannis
Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first one refers to the dense (holistic) representation of the face, where faces have "holon"-like appearance. The second one claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.
Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.
Matsumoto, David; Willingham, Bob
2009-01-01
The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these were isolated to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.
[Study on the indexes of forensic identification by the occlusal-facial digital radiology].
Gao, Dong; Wang, Hu; Hu, Jin-liang; Xu, Zhe; Deng, Zhen-hua
2006-02-01
To discuss the coding of the full dentition with 32 locations and to measure the characteristics of selected bony indexes in occlusal-facial digital radiology (DR). Three hundred DR orthopantomograms were randomly selected and the full dentition coded, and the diversity of the dental patterns was analyzed. One hundred DR lateral cephalograms were randomly selected and six indexes (N-S, N-Me, Cd-Gn, Cd-Go, NP-SN, MP-SN) were measured separately by one odontologist and one trained forensic graduate student; the coefficient of variation (CV) of each index was calculated and the consistency between the two sets of measurements assessed by correlation analysis. (1) The total diversity of the 300 dental patterns was 75%, a very high value. (2) All six quantitative variables had comparatively high CV values. (3) Linear correlation analysis between the two sets of measurements showed all six correlation coefficients close to 1, indicating that the measurements were stable and consistent. The method of coding the full dentition in DR orthopantomograms and measuring six bony indexes in DR lateral cephalograms can be used for forensic identification.
Koudelová, J; Brůžek, J; Cagáňová, V; Krajíček, V; Velemínská, J
2015-08-01
To evaluate sexual dimorphism of facial form and shape and to describe differences between the average female and male face from 12 to 15 years. Overall 120 facial scans from healthy Caucasian children (17 boys, 13 girls) were longitudinally evaluated over a 4-year period between the ages of 12 and 15 years. Facial surface scans were obtained using a three-dimensional optical scanner Vectra-3D. Variation in facial shape and form was evaluated using geometric morphometric and statistical methods (DCA, PCA and permutation test). Average faces were superimposed, and the changes were evaluated using colour-coded maps. There were no significant sex differences (p > 0.05) in shape in any age category and no differences in form in the 12- and 13-year-olds, as the female faces were within the area of male variability. From the age of 14, a slight separation occurred, which was statistically confirmed. The differences were mainly associated with size. Generally boys had more prominent eyebrow ridges, more deeply set eyes, a flatter cheek area, and a more prominent nose and chin area. The development of facial sexual dimorphism during pubertal growth is connected with ontogenetic allometry. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Foroni, Francesco; Semin, Gün R
2009-08-01
Observing and producing a smile activate the very same facial muscles. In Experiment 1, we predicted and found that verbal stimuli (action verbs) that refer to emotional expressions elicit the same facial muscle activity (facial electromyography) as visual stimuli do. These results are evidence that language referring to facial muscular activity is not amodal, as traditionally assumed, but is instead bodily grounded. These findings were extended in Experiment 2, in which subliminally presented verbal stimuli were shown to drive muscle activation and to shape judgments, but not when muscle activation was blocked. These experiments provide an important bridge between research on the neurobiological basis of language and related behavioral research. The implications of these findings for theories of language and other domains of cognitive psychology (e.g., priming) are discussed.
Effect of an observer's presence on facial behavior during dyadic communication.
Yamamoto, K; Suzuki, N
2012-06-01
In everyday life, people communicate not only with another person but also in front of other people. How do people behave during communication when observed by others? Effects of an observer (presence vs absence) and interpersonal relationship (friends vs strangers vs alone) on facial behavior were examined. Participants viewed film clips that elicited positive affect (film presentation) and discussed their impressions about the clips (conversation). Participants rated their subjective emotions and social motives. Durations of smiles, gazes, and utterances of each participant were coded. The presence of an observer did not affect facial behavior during the film presentation, but did affect gazes during conversation. Whereas the presence of an observer seemed to facilitate affiliation in pairs of strangers, communication between friends was exclusive and not affected by an observer.
The extraction and use of facial features in low bit-rate visual communication.
Pearson, D
1992-01-29
A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.
Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.
Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus
2013-12-01
Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of emotion with gender discrimination task revealed increased activation of inferior parietal lobule, which highlights the involvement of parietal areas in processing of high level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
Wickert, Natasha M; Wong Riff, Karen W Y; Mansour, Mark; Forrest, Christopher R; Goodacre, Timothy E E; Pusic, Andrea L; Klassen, Anne F
2018-01-01
Objective: The aim of this systematic review was to identify patient-reported outcome (PRO) instruments used in research with children/youth with conditions associated with facial differences to identify the health concepts measured. Design: MEDLINE, EMBASE, CINAHL, and PsycINFO were searched from 2004 to 2016 to identify PRO instruments used in acne vulgaris, birthmarks, burns, ear anomalies, facial asymmetries, and facial paralysis patients. We performed a content analysis whereby the items were coded to identify concepts and categorized as positive or negative content or phrasing. Results: A total of 7,835 articles were screened; 6 generic and 11 condition-specific PRO instruments were used in 96 publications. Condition-specific instruments were for acne (four), oral health (two), dermatology (one), facial asymmetries (two), microtia (one), and burns (one). The PRO instruments provided 554 items (295 generic; 259 condition specific) that were sorted into 4 domains, 11 subdomains, and 91 health concepts. The most common domain was psychological (n = 224 items). Of the identified items, 76% had negative content or phrasing (e.g., "Because of the way my face looks I wish I had never been born"). Given the small number of items measuring facial appearance (n = 19) and function (n = 22), the PRO instruments reviewed lacked content validity for patients whose condition impacted facial function and/or appearance. Conclusions: Treatments can change facial appearance and function. This review draws attention to a problem with content validity in existing PRO instruments. Our team is now developing a new PRO instrument called FACE-Q Kids to address this problem.
Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.
Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál
2014-02-01
Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
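Inter-trial coherence (ITC), one of the two measures used in this study, has a compact definition: the magnitude of the mean unit-normalized complex spectral estimate across trials. A sketch with synthetic phases (all numbers illustrative, not EEG data):

```python
import numpy as np

def itc(z):
    # inter-trial coherence of complex spectral estimates z (one per trial):
    # normalize each to unit length, average, and take the magnitude
    return np.abs(np.mean(z / np.abs(z)))

rng = np.random.default_rng(0)
n_trials = 50
# trials whose theta phase is tightly locked to stimulus onset...
locked = np.exp(1j * (0.3 + 0.1 * rng.standard_normal(n_trials)))
# ...versus trials with uniformly random phase
unlocked = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_trials))
# locked phases give ITC near 1, random phases give ITC near 0
```

Weaker phase-locking in patients, as reported above, corresponds to ITC values sitting closer to the random-phase end of this scale.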
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
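The dimensionality claim can be illustrated with PCA on synthetic data built from two temporal primitives. PCA here is a simple stand-in for the model learning compared in the paper, and all data below are fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
# two hidden temporal primitives (arbitrary smooth shapes)
primitives = np.vstack([np.sin(2.0 * np.pi * t), t * (1.0 - t)])  # 2 x 100

# 11 "expressions", each a weighted mix of the two primitives plus noise
weights = rng.standard_normal((11, 2))
data = weights @ primitives + 0.01 * rng.standard_normal((11, 100))

# PCA via SVD of the centered data: variance explained per component
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s ** 2 / np.sum(s ** 2)
# the first two components carry almost all of the variance
```

In the paper the analogous result is that two primitives suffice to reconstruct 11 recorded dynamic expressions with high accuracy.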
French-speaking children’s freely produced labels for facial expressions
Maassarani, Reem; Gosselin, Pierre; Montembeault, Patricia; Gagnon, Mathieu
2014-01-01
In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. 5- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell (2003) is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children. PMID:24926281
Diminished facial emotion expression and associated clinical characteristics in Anorexia Nervosa.
Lang, Katie; Larsson, Emma E C; Mavromara, Liza; Simic, Mima; Treasure, Janet; Tchanturia, Kate
2016-02-28
This study aimed to investigate emotion expression in a large group of children, adolescents and adults with Anorexia Nervosa (AN), and investigate the associated clinical correlates. One hundred and forty-one participants (AN = 66, HC = 75) were recruited and positive and negative film clips were used to elicit emotion expressions. The Facial Activation Coding system (FACES) was used to code emotion expression. Subjective ratings of emotion were collected. Individuals with AN displayed less positive emotion during the positive film clip compared to healthy controls (HC). There was no significant difference between the groups on the Positive and Negative Affect Scale (PANAS). The AN group displayed emotional incongruence (reporting a different emotion to what would be expected given the stimuli, with limited facial affect to signal the emotion experienced), whereby they reported feeling significantly higher rates of negative emotion during the positive clip. There were no differences in emotion expression between the groups during the negative film clip. Despite this, individuals with AN reported feeling significantly higher levels of negative emotions during the negative clip. Diminished positive emotion expression was associated with more severe clinical symptoms, which could suggest that these individuals represent a group with serious social difficulties, which may require specific attention in treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
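The similarity-feedback search can be sketched as a plain genetic algorithm. Here the witness's ratings are simulated by distance to a hidden target vector; the 8-D feature code, population sizes, mutation rate, and generation count are all arbitrary illustrative choices, not the system's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(8)  # hidden "target face" in an 8-D feature code

def fitness(pop):
    # simulated witness feedback: higher for faces closer to the target
    return -np.linalg.norm(pop - target, axis=1)

pop = rng.random((20, 8))  # initial random selection of encoded faces
for _ in range(200):
    keep = pop[np.argsort(fitness(pop))[-10:]]           # select best half
    mask = rng.integers(0, 2, keep.shape).astype(bool)   # uniform crossover
    kids = np.where(mask, keep, keep[rng.permutation(10)])
    kids += 0.05 * rng.standard_normal(kids.shape)       # mutation
    pop = np.vstack([keep, np.clip(kids, 0.0, 1.0)])     # elitist replacement

best = pop[np.argmax(fitness(pop))]
# best converges toward the hidden target over the generations
```

In the real system the fitness function is the witness's similarity judgment on a small screen of candidate images, so far fewer, noisier evaluations are available per generation.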
Schubotz, Ricarda I.; Wurm, Moritz F.; Wittmann, Marco K.; von Cramon, D. Yves
2014-01-01
Objects are reminiscent of actions often performed with them: a knife and an apple remind us of peeling or cutting the apple. Mnemonic representations of object-related actions (action codes) evoked by the sight of an object may constrain and hence facilitate recognition of unfolding actions. The present fMRI study investigated if and how action codes influence brain activation during action observation. The average number of action codes (NAC) of 51 sets of objects was rated by a group of n = 24 participants. In an fMRI study, different volunteers were asked to recognize actions performed with the same objects presented in short videos. To disentangle areas reflecting the storage of action codes from those exploiting them, we showed object-compatible and object-incompatible (pantomime) actions. Areas storing action codes were considered to positively co-vary with NAC in both object-compatible and object-incompatible action; due to its role in tool-related tasks, we here hypothesized left anterior inferior parietal cortex (aIPL). In contrast, areas exploiting action codes were expected to show this correlation only in object-compatible but not incompatible action, as only object-compatible actions match one of the active action codes. For this interaction, we hypothesized ventrolateral premotor cortex (PMv) to join aIPL due to its role in biasing competition in IPL. We found left anterior intraparietal sulcus (IPS) and left posterior middle temporal gyrus (pMTG) to co-vary with NAC. In addition to these areas, action codes increased activity in object-compatible action in bilateral PMv, right IPS, and lateral occipital cortex (LO). Findings suggest that during action observation, the brain derives possible actions from perceived objects, and uses this information to shape action recognition. In particular, the number of expectable actions quantifies the activity level at PMv, IPL, and pMTG, but only PMv reflects their biased competition while observed action unfolds.
PMID:25009519
Intact Imitation of Emotional Facial Actions in Autism Spectrum Conditions
ERIC Educational Resources Information Center
Press, Clare; Richardson, Daniel; Bird, Geoffrey
2010-01-01
It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired.…
AP-2α and AP-2β cooperatively orchestrate homeobox gene expression during branchial arch patterning.
Van Otterloo, Eric; Li, Hong; Jones, Kenneth L; Williams, Trevor
2018-01-25
The evolution of a hinged moveable jaw with variable morphology is considered a major factor behind the successful expansion of the vertebrates. DLX homeobox transcription factors are crucial for establishing the positional code that patterns the mandible, maxilla and intervening hinge domain, but how the genes encoding these proteins are regulated remains unclear. Herein, we demonstrate that the concerted action of the AP-2α and AP-2β transcription factors within the mouse neural crest is essential for jaw patterning. In the absence of these two proteins, the hinge domain is lost and there are alterations in the size and patterning of the jaws correlating with dysregulation of homeobox gene expression, with reduced levels of Emx, Msx and Dlx paralogs accompanied by an expansion of Six1 expression. Moreover, detailed analysis of morphological features and gene expression changes indicate significant overlap with various compound Dlx gene mutants. Together, these findings reveal that the AP-2 genes have a major function in mammalian neural crest development, influencing patterning of the craniofacial skeleton via the DLX code, an effect that has implications for vertebrate facial evolution, as well as for human craniofacial disorders. © 2018. Published by The Company of Biologists Ltd.
Micro-Expression Recognition Using Color Spaces.
Wang, Su-Jing; Yan, Wen-Jing; Li, Xiaobai; Zhao, Guoying; Zhou, Chun-Guang; Fu, Xiaolan; Yang, Minghao; Tao, Jianhua
2015-12-01
Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimensional array. The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves a higher accuracy than that of RGB. In addition, we define a set of regions of interest (ROIs) based on the Facial Action Coding System and calculate the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray.
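The fourth-order tensor view and the color-independence step can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's TICS derivation: it treats a clip as an (H, W, T, 3) array and applies a global eigen-rotation so the three color channels become mutually uncorrelated; the function name and this particular decorrelation are assumptions.

```python
import numpy as np

def decorrelate_color(clip):
    """Rotate the color axis of an (H, W, T, 3) clip so the three
    channels become mutually uncorrelated (an illustrative stand-in
    for a tensor independent color space, not the paper's method)."""
    h, w, t, c = clip.shape
    X = clip.reshape(-1, c).astype(float)
    X -= X.mean(axis=0)                 # center each channel
    cov = np.cov(X, rowvar=False)       # 3x3 channel covariance
    _, V = np.linalg.eigh(cov)          # orthogonal rotation
    Y = X @ V                           # decorrelated channels
    return Y.reshape(h, w, t, c)
```

Dynamic-texture features would then be computed on the rotated channels instead of raw RGB.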
Cartaud, Alice; Ruggiero, Gennaro; Ott, Laurent; Iachini, Tina; Coello, Yann
2018-01-01
Accurate control of interpersonal distances in social contexts is an important determinant of effective social interactions. Although comfortable interpersonal distance seems to depend on social factors such as the gender, age and activity of the confederates, it also seems to be modulated by the way we represent our peripersonal-action space. To test this hypothesis, the present study investigated the relation between the emotional responses, registered through electrodermal activity (EDA), triggered by human-like point-light displays (PLDs) carrying different facial expressions (neutral, angry, happy) when located in the participants' peripersonal or extrapersonal space, and the comfort distance with the same PLDs when approaching and crossing the participants' fronto-parallel axis on the right or left side. The results show an increase of phasic EDA for PLDs with angry facial expressions located in the peripersonal space (reachability judgment task), in comparison to the same PLDs located in the extrapersonal space, which was not observed for PLDs with neutral or happy facial expressions. The results also show an increase of the comfort distance for PLDs approaching the participants with an angry facial expression (interpersonal comfort distance judgment task), in comparison to PLDs with happy or neutral expressions, which was related to the increase of the physiological response. Overall, the findings indicate that comfortable social space can be predicted from the emotional reaction triggered by a confederate located within the observer's peripersonal space. This suggests that peripersonal-action space and interpersonal-social space are similarly sensitive to the emotional valence of the confederate, which could reflect a common adaptive mechanism specifying these spaces to subtend interactions with both the physical and social environment, but also to ensure body protection from potential threats.
Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica
2018-01-01
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differ both in terms of the amount and intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process-one through which we can swiftly and involuntarily detect other people's pain.
Accurately Assessing Lines on the Aging Face.
Renton, Kim; Keefe, Kathy Young
The ongoing positive aging trend has prompted many research studies into the characteristics of aging and the steps we can take to prevent its extrinsic signs. Much of this attention has focused on the prevention and treatment of facial wrinkles. To treat or prevent facial wrinkles correctly, their causative action first needs to be determined. Published evidence makes a compelling case that the development of wrinkles is complex and is caused by more factors than poor lifestyle choices alone.
Kwit, Natalie A; Max, Ryan; Mead, Paul S
2018-01-01
Background: Clinical features of Lyme disease (LD) range from localized skin lesions to serious disseminated disease. Information on risk factors for Lyme arthritis, facial palsy, carditis, and meningitis is limited but could facilitate disease recognition and elucidate pathophysiology. Methods: Patients from high-incidence states treated for LD during 2005–2014 were identified in a nationwide insurance claims database using the International Classification of Diseases, Ninth Revision code for LD (088.81), antibiotic treatment history, and clinically compatible codiagnosis codes for LD manifestations. Results: Among 88,022 unique patients diagnosed with LD, 5,122 (5.8%) patients with 5,333 codiagnoses were identified: 2,440 (2.8%) arthritis, 1,853 (2.1%) facial palsy, 534 (0.6%) carditis, and 506 (0.6%) meningitis. Patients with disseminated LD had a lower median age (35 vs 42 years) and a higher male proportion (61% vs 50%) than those with nondisseminated LD. The greatest differential risks included arthritis in males aged 10–14 years (odds ratio [OR], 3.5; 95% confidence interval [CI], 3.0–4.2), facial palsy (OR, 2.1; 95% CI, 1.6–2.7) and carditis (OR, 2.4; 95% CI, 1.6–3.6) in males aged 20–24 years, and meningitis in females aged 10–14 years (OR, 3.4; 95% CI, 2.1–5.5), compared to the 55–59 year referent age group. Males aged 15–29 years had the highest risk for complete heart block, a potentially fatal condition. Conclusions: The risk and manifestations of disseminated LD vary by age and sex. Provider education regarding at-risk populations and additional investigations into pathophysiology could enhance early case recognition and improve patient management. PMID:29326960
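The odds ratios and 95% confidence intervals reported above follow the standard 2x2-table computation with a Wald interval on the log scale; a minimal sketch (the function name and example counts are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, odds_ratio_ci(10, 20, 5, 40) gives an odds ratio of 4.0 with a CI of roughly 1.2 to 13.3.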
Compressive sensing using optimized sensing matrix for face verification
NASA Astrophysics Data System (ADS)
Oey, Endra; Jeffry; Wongso, Kelvin; Tommy
2017-12-01
Biometrics offers a solution to problems that arise with password-based data access; for example, passwords can be forgotten, and recalling many different passwords is hard. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access data. Facial biometrics was chosen for its low implementation cost and for generating quite accurate user identification results. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce dimensionality as well as encrypt the facial test image, representing the image as sparse signals. The encrypted data can be reconstructed using a sparse coding algorithm. Two sparse coding algorithms, Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. With a non-optimized sensing matrix, the system achieves 99% accuracy in IRLS with a face-verification response time of 4.917 seconds and 96.33% in OMP with a response time of 0.4046 seconds; with an optimized sensing matrix, it achieves 99% in IRLS with a response time of 13.4791 seconds and 98.33% in OMP with a response time of 3.1571 seconds.
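A minimal sketch of the OMP reconstruction and Euclidean-norm verification steps described above (textbook OMP with assumed names and shapes, not the authors' implementation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse code x
    with A @ x close to y (a textbook sketch of the algorithm)."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def verify(x_test, x_enrolled, threshold):
    """Accept when the reconstructed sparse codes are close in Euclidean norm."""
    return np.linalg.norm(x_test - x_enrolled) < threshold
```

In an actual system the enrolled code would come from the user's registered face, and the threshold would be tuned on validation data.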
Patient experiences and outcomes following facial skin cancer surgery: A qualitative study.
Lee, Erica H; Klassen, Anne F; Lawson, Jessica L; Cano, Stefan J; Scott, Amie M; Pusic, Andrea L
2016-08-01
Early melanoma and non-melanoma skin cancer of the facial area are primarily treated with surgery. Little is known about treatment outcomes for facial skin cancer patients. The objective of the study was to identify concerns about aesthetics, procedures and health from the patients' perspective after facial skin surgery. Semi-structured in-depth interviews were conducted with 15 participants. Line-by-line coding was used to establish categories and develop themes. We identified five major themes on the impact of skin cancer surgery: appearance-related concerns; psychological concerns (e.g., fear of new cancers or recurrence); social concerns (e.g., impact on social activities and interaction); physical concerns (e.g., pain and swelling); and satisfaction with the experience of care (e.g., satisfaction with the surgeon). The priority of participants was the removal of the facial skin cancer, as this reduced their overall worry. The aesthetic outcome was secondary but important, as it had important implications for the participants' social and psychological functioning. The participants' experience with the care provided by the surgeon and staff also contributed to their satisfaction with their treatment. This conceptual framework provides the basis for the development of a new patient-reported outcome instrument. © 2015 The Australasian College of Dermatologists.
Riehle, M; Mehl, S; Lincoln, T M
2018-04-17
We tested whether people with schizophrenia and prominent expressive negative symptoms (ENS) show reduced facial expressions in face-to-face social interactions and whether this expressive reduction explains negative social evaluations of these persons. We compared participants with schizophrenia with high ENS (n = 18) with participants with schizophrenia with low ENS (n = 30) and with healthy controls (n = 39). Participants engaged in an affiliative role-play that was coded for the frequency of positive and negative facial expressions and rated for social performance skills and willingness for future interactions with the respective role-play partner. Participants with schizophrenia with high ENS showed significantly fewer positive facial expressions than those with low ENS and controls and were also rated significantly lower on social performance skills and willingness for future interactions. Participants with schizophrenia with low ENS did not differ from controls on these measures. The group difference in willingness for future interactions was significantly and independently mediated by the reduced positive facial expressions and social performance skills. Reduced facial expressiveness in schizophrenia is specifically related to ENS and has negative social consequences. These findings highlight the need to develop aetiological models and targeted interventions for ENS and its social consequences. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Ragneskog, H; Asplund, K; Kihlgren, M; Norberg, A
2001-06-01
Many nursing home patients with dementia suffer from symptoms of agitation (e.g. anxiety, shouting, irritability). This study investigated whether individualized music could be used as a nursing intervention to reduce such symptoms in four patients with severe dementia. The patients were video-recorded during four sessions in four periods, including a control period without music, two periods where individualized music was played, and one period where classical music was played. The recordings were analysed by systematic observations and the Facial Action Coding System. Two patients became calmer during some of the individualized music sessions; one patient remained sitting in her armchair longer, and the other patient stopped shouting. For the two patients who were most affected by dementia, the noticeable effect of music was minimal. If the nursing staff succeed in discovering the music preferences of an individual, individualized music may be an effective nursing intervention to mitigate anxiety and agitation for some patients.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit rates. However, the estimation of object motion parameters remains a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts: the first is the global 3-D rigid motion of the head; the second is non-rigid translation motion in the jaw area; and the third consists of local non-rigid expression motion in the eye and mouth areas. Feature points are automatically selected by a function of edges, brightness, and end-nodes outside the eye and mouth blocks, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through changes in the positions of the jaw feature points. Areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
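The global rigid component is typically estimated from matched feature points; below is a minimal 2-D sketch using the standard Kabsch/Procrustes solution as an illustrative stand-in for the paper's estimator (all names and shapes are assumptions):

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares rotation R and translation t such that
    Q[i] is close to R @ P[i] + t, for matched feature points
    P, Q of shape (N, 2) (the Kabsch method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t
```

The sign-correction matrix D guards against reflections, so the recovered transform is always a proper rotation.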
Kret, Mariska E.
2015-01-01
Humans are well adapted to quickly recognize and adequately respond to another's emotions. Different theories propose that mimicry of emotional expressions (facial or otherwise) mechanistically underlies, or at least facilitates, these swift adaptive reactions. When people unconsciously mimic their interaction partner's expressions of emotion, they come to feel reflections of those companions' emotions, which in turn influence the observer's own emotional and empathic behavior. The majority of research has focused on facial actions as expressions of emotion. However, the fact that emotions are not expressed by the facial muscles alone is often still ignored in emotion perception research. In this article, I therefore argue for a broader exploration of emotion signals from sources beyond the face muscles that are more automatic and difficult to control. Specifically, I focus on the perception of implicit sources such as gaze and tears, and autonomic responses such as pupil dilation, eyeblinks and blushing, which are subtle yet visible to observers and, because they can hardly be controlled or regulated by the sender, provide important "veridical" information. Recently, more research has emerged on the mimicry of these subtle affective signals, including pupil-mimicry. Here I review this literature and suggest avenues for future research that will eventually lead to a better comprehension of how these signals help in making social judgments and in understanding each other's emotions. PMID:26074855
Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.
2016-01-01
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
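The core space-by-time idea above, a movement matrix approximated by a small set of spatial (Action Unit) patterns multiplied by temporal profiles, can be illustrated with a truncated SVD. This is only a stand-in for the paper's own decomposition, and the names and shapes are assumptions:

```python
import numpy as np

def space_by_time(M, r):
    """Approximate an (AUs x time) movement matrix M as a sum of r
    separable components: spatial patterns W (AUs x r) multiplied by
    temporal profiles H (r x time). Truncated SVD is used here purely
    as an illustration of separability in space and time."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    W = U[:, :r] * s[:r]          # spatial (Action Unit) components
    H = Vt[:r]                    # temporal components
    return W, H
```

A trial's facial movement then lives on a low-dimensional manifold spanned by the columns of W over the time courses in H.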
Yankouskaya, Alla; Humphreys, Glyn W.; Rotshtein, Pia
2014-01-01
Facial identity and emotional expression are two important sources of information for daily social interaction. However, the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing, with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the "Garner" paradigm, the composite face effect and divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of the relations depends on experience, with experience leading to a higher degree of inter-dependence in the processing of identity and expressions. We propose that this change occurs because integrative processes are more efficient than parallel ones. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field. PMID:25452722
Statistical Analysis of Online Eye and Face-tracking Applications in Marketing
NASA Astrophysics Data System (ADS)
Liu, Xuan
Eye-tracking and face-tracking technology have been widely adopted to study viewers' attention and emotional response. In this dissertation, we apply these two technologies to investigate effective online content designed to attract and direct attention and engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues among American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study in which we investigate the relation between experiment participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers and how to boost audiences' desire to watch the movies.
Emotional Complexity and the Neural Representation of Emotion in Motion
Barnard, Philip J.; Lawrence, Andrew D.
2011-01-01
According to theories of emotional complexity, individuals low in emotional complexity encode and represent emotions in visceral or action-oriented terms, whereas individuals high in emotional complexity encode and represent emotions in a differentiated way, using multiple emotion concepts. During functional magnetic resonance imaging, participants viewed valenced animated scenarios of simple ball-like figures while attending either to social or spatial aspects of the interactions. Participants' emotional complexity was assessed using the Levels of Emotional Awareness Scale. We found that a distributed set of brain regions previously implicated in processing emotion from facial, vocal and bodily cues, in processing social intentions, and in emotional response was sensitive to emotion conveyed by motion alone. Attention to social meaning amplified the influence of emotion in a subset of these regions. Critically, increased emotional complexity correlated with enhanced processing in a left temporal polar region implicated in detailed semantic knowledge; with a diminished effect of social attention; and with increased differentiation of brain activity between films of differing valence. Decreased emotional complexity was associated with increased activity in regions of pre-motor cortex. Thus, neural coding of emotion in semantic vs action systems varies as a function of emotional complexity, helping reconcile puzzling inconsistencies in neuropsychological investigations of emotion recognition. PMID:20207691
Infants’ sensitivity to emotion in music and emotion-action understanding
Siu, Tik-Sze Carrey; Cheung, Him
2017-01-01
Emerging evidence has indicated infants’ early sensitivity to acoustic cues in music. Do they interpret these cues in emotional terms to represent others’ affective states? The present study examined infants’ development of emotional understanding of music with a violation-of-expectation paradigm. Twelve- and 20-month-olds were presented with emotionally concordant and discordant music-face displays on alternate trials. The 20-month-olds, but not the 12-month-olds, were surprised by emotional incongruence between musical and facial expressions, suggesting their sensitivity to musical emotion. In a separate non-music task, only the 20-month-olds were able to use an actress’s affective facial displays to predict her subsequent action. Interestingly, for the 20-month-olds, such emotion-action understanding correlated with sensitivity to musical expressions measured in the first task. These two abilities however did not correlate with family income, parental estimation of language and communicative skills, and quality of parent-child interaction. The findings suggest that sensitivity to musical emotion and emotion-action understanding may be supported by a generalised common capacity to represent emotion from social cues, which lays a foundation for later social-communicative development. PMID:28152081
The structure of affective action representations: temporal binding of affective response codes.
Eder, Andreas B; Müsseler, Jochen; Hommel, Bernhard
2012-01-01
Two experiments examined the hypothesis that preparing an action with a specific affective connotation involves the binding of this action to an affective code reflecting this connotation. This integration into an action plan should lead to a temporary occupation of the affective code, which should impair the concurrent representation of affectively congruent events, such as the planning of another action with the same valence. This hypothesis was tested with a dual-task setup that required a speeded choice between approach- and avoidance-type lever movements after having planned and before having executed an evaluative button press. In line with the code-occupation hypothesis, slower lever movements were observed when the lever movement was affectively compatible with the prepared evaluative button press than when the two actions were affectively incompatible. Lever movements related to approach and avoidance and evaluative button presses thus seem to share a code that represents affective meaning. A model of affective action control that is based on the theory of event coding is discussed.
[On the contribution of magnets in sequelae of facial paralysis. Preliminary clinical study].
Fombeur, J P; Koubbi, G; Chevalier, A M; Mousset, C
1988-01-01
This trial was designed to evaluate the efficacy of EPOREC 1 500 magnets as an adjuvant to rehabilitation following peripheral facial paralysis. Magnetotherapy is used in many other specialties, in particular rheumatology. The property of repulsion between identical poles was used to decrease the effect of sequelae in the form of contractures of the facial muscles. There were two groups of 20 patients: one group with physiotherapy only and the other with standard rehabilitation together with the use of magnets. These 40 patients had facial paralysis of various origins (trauma, excision of acoustic neuroma, Bell's palsy, etc.). Obviously all patients had an intact nerve. Magnets were introduced once contractures developed, allowing evaluation of their efficacy against synkinesis, contractures and spasticity. Magnets were worn at night for a mean period of six months, and results were assessed in terms of the disappearance of eye-mouth synkinesis and the normality of facial tone. Improvement and total recovery without sequelae were obtained far more frequently in the group that wore magnets, encouraging us to continue along these lines.
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Toward DNA-based facial composites: preliminary results and validation.
Claes, Peter; Hill, Harold; Shriver, Mark D
2014-11-01
The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks.
To the best of our knowledge this is the first effort at generating facial composites from DNA and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
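The base-face-plus-overlay procedure described above can be sketched as additive SNP effects on a shape vector; the function name, landmark coordinates, and effect sizes below are hypothetical illustrations, not the authors' model:

```python
# Hypothetical sketch of the photomontage-like composite logic: start from a
# sex- and ancestry-matched average face (the "base-face"), then add each
# SNP's effect vector scaled by the effect-allele dosage. Facial shape is
# simplified here to a short coordinate vector.

def predict_face(base_face, snp_effects, genotypes):
    """Overlay additive SNP effects on the base-face.

    base_face   -- average face for the matched sex/ancestry group
    snp_effects -- per-SNP displacement of each facial coordinate
    genotypes   -- effect-allele dosage (0, 1, or 2) per SNP
    """
    face = list(base_face)
    for effect, dosage in zip(snp_effects, genotypes):
        for i, delta in enumerate(effect):
            face[i] += dosage * delta
    return face

base = [10.0, 20.0, 30.0]                      # hypothetical landmark coords
effects = [[0.5, 0.0, -0.5], [0.0, 1.0, 0.0]]  # two illustrative SNP effects
print(predict_face(base, effects, [2, 1]))     # [11.0, 21.0, 29.0]
```

The additive form also makes the paper's extensibility point concrete: incorporating a newly discovered SNP is just one more (effect, dosage) pair.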
Spisak, Brian R.; Blaker, Nancy M.; Lefevre, Carmen E.; Moore, Fhionna R.; Krebbers, Kleis F. B.
2014-01-01
Previous research indicates that followers tend to contingently match particular leader qualities to evolutionarily consistent situations requiring collective action (i.e., context-specific cognitive leadership prototypes), and that information processing undergoes categorization, which ranks certain qualities as first-order context-general and others as second-order context-specific. To further investigate this contingent categorization phenomenon, we examined the “attractiveness halo”, a first-order facial cue that significantly biases leadership preferences. While controlling for facial attractiveness, we independently manipulated the underlying facial cues of health and intelligence and then primed participants with four distinct organizational dynamics requiring leadership (i.e., competition vs. cooperation between groups and exploratory change vs. stable exploitation). It was expected that the differing requirements of the four dynamics would contingently select for relatively healthier- or more intelligent-looking leaders. We found perceived facial intelligence to be a second-order context-specific trait, preferred, for instance, in times requiring a leader to address between-group cooperation, whereas perceived health is significantly preferred across all contexts (i.e., a first-order trait). The results also indicate that facial health positively affects perceived masculinity while facial intelligence negatively affects perceived masculinity, which may partially explain leader choice in some of the environmental contexts. The limitations and a number of implications regarding leadership biases are discussed. PMID:25414653
Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A.
2015-01-01
The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work therefore examined whether chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS, a standardized coding system for measuring chimpanzee facial movements based on the FACS developed for humans, was applied. The data showed that the chimpanzees produced the same 14 configurations of open-mouth faces whether laugh sounds were present or absent. Chimpanzees thus produce these facial expressions flexibly, without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates’ open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently of a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. 
This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans. PMID:26061420
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness constraint that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM, which iteratively augments training samples to train the confident classifiers, and kCPM, which kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets, GFT [15], BP4D [56], DISFA [42], and RU-FACS [3], illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semi-supervised learning and transfer learning methods. PMID:27479964
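The "easy-to-hard" virtual-labeling idea can be sketched as follows; the thresholds, scores, and function names are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch (assumed logic, not the CPM code) of stage one: confident
# classifiers label only the easy test samples, and those "virtual labels"
# would then train a person-specific classifier for the hard samples.

def confident_predict(score, pos_thr=0.7, neg_thr=0.3):
    """Return a virtual label for easy samples, None for hard ones."""
    if score >= pos_thr:
        return 1      # confidently AU-present
    if score <= neg_thr:
        return 0      # confidently AU-absent
    return None       # hard sample: deferred to the person-specific stage

def virtual_labels(scores):
    """Keep only the easy, confidently labeled test samples."""
    return [(i, lab) for i, s in enumerate(scores)
            if (lab := confident_predict(s)) is not None]

# Hypothetical classifier scores for one test subject's video frames.
scores = [0.9, 0.5, 0.1, 0.8, 0.4]
print(virtual_labels(scores))  # [(0, 1), (2, 0), (3, 1)]
```

Frames 1 and 4 fall in the uncertain band and receive no virtual label, which is exactly the population the paper's quasi-semi-supervised stage is meant to handle.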
Hur, Dong Min; Lee, Young Hee; Kim, Sung Hoon; Park, Jung Mi; Kim, Ji Hyun; Yong, Sang Yeol; Shinn, Jong Mock; Oh, Kyung Joon
2013-01-01
Objective To examine the neurophysiologic status in patients with idiopathic facial nerve palsy (Bell's palsy) and Ramsay Hunt syndrome (herpes zoster oticus) within 7 days from onset of symptoms, by comparing the amplitude of compound muscle action potentials (CMAP) of facial muscles in electroneuronography (ENoG) and transcranial magnetic stimulation (TMS). Methods The facial nerve conduction study using ENoG and TMS was performed in 42 patients with Bell's palsy and 14 patients with Ramsay Hunt syndrome within 7 days from onset of symptoms. The denervation ratio was calculated as the CMAP amplitude evoked by ENoG or TMS on the affected side, expressed as a percentage of the amplitude on the healthy side. The severity of the facial palsy was graded according to the House-Brackmann facial grading scale (H-B FGS). Results In all subjects, the denervation ratio in TMS (71.53±18.38%) was significantly greater than the denervation ratio in ENoG (41.95±21.59%). The difference in denervation ratio between ENoG and TMS was significantly smaller in patients with Ramsay Hunt syndrome than in patients with Bell's palsy. The denervation ratio of ENoG or TMS did not correlate significantly with the H-B FGS. Conclusion In the electrophysiologic study for evaluation of patients with facial palsy within 7 days from onset of symptoms, ENoG and TMS are useful in gaining additional information about the neurophysiologic status of the facial nerve and may help to evaluate prognosis and set a management plan. PMID:23525840
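The denervation ratio defined in the Methods reduces to a simple percentage; the amplitude values below are hypothetical, not patient data from the study:

```python
# Sketch of the denervation-ratio computation described above: the affected
# side's CMAP amplitude expressed as a percentage of the healthy side's.

def denervation_ratio(affected_amplitude_mv, healthy_amplitude_mv):
    """CMAP amplitude on the affected side as % of the healthy side."""
    return 100.0 * affected_amplitude_mv / healthy_amplitude_mv

# Hypothetical ENoG-like and TMS-like measurements for one patient.
print(denervation_ratio(1.2, 3.0))  # 40.0
print(denervation_ratio(2.1, 3.0))  # 70.0
```

On such a patient the TMS-derived ratio would exceed the ENoG-derived one, matching the direction of the group difference the abstract reports.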
Bertalanffy, Helmut; Tissira, Nadir; Krayenbühl, Niklaus; Bozinov, Oliver; Sarnthein, Johannes
2011-03-01
Surgical exposure of intrinsic brainstem lesions through the floor of the 4th ventricle requires precise identification of facial nerve (CN VII) fibers to avoid damage. To assess the shape, size, and variability of the area where the facial nerve can be stimulated electrophysiologically on the surface of the rhomboid fossa. Over a period of 18 months, 20 patients were operated on for various brainstem and/or cerebellar lesions. Facial nerve fibers were stimulated to yield compound muscle action potentials (CMAP) in the target muscles. Using the sites of CMAP yield, a detailed functional map of the rhomboid fossa was constructed for each patient. Lesions resected included 14 gliomas, 5 cavernomas, and 1 epidermoid cyst. Of 40 response areas mapped, 19 reached the median sulcus. The distance from the obex to the caudal border of the response area ranged from 8 to 27 mm (median, 17 mm). The rostrocaudal length of the response area ranged from 2 to 15 mm (median, 5 mm). Facial nerve response areas showed large variability in size and position, even in patients with significant distance between the facial colliculus and underlying pathological lesion. Lesions located close to the facial colliculus markedly distorted the response area. This is the first documentation of variability in the CN VII response area in the rhomboid fossa. Knowledge of this remarkable variability may facilitate the assessment of safe entry zones to the brainstem and may contribute to improved outcome following neurosurgical interventions within this sensitive area of the brain.
Har-Shai, Yaron; Gil, Tamir; Metanes, Issa; Labbé, Daniel
2010-07-01
Facial paralysis is a significant functional and aesthetic handicap. Facial reanimation is performed either by two-stage microsurgical methods or by regional one-stage muscle pedicle flaps. Labbé has modified and improved the regional muscle pedicle transfer flaps for facial reanimation (i.e., the lengthening temporalis myoplasty procedure). This true myoplasty technique is capable of producing a coordinated, spontaneous, and symmetrical smile. An intraoperative electrical stimulation of the temporal muscle is proposed to simulate the smile of the paralyzed side on the surgical table. The intraoperative electrical stimulation of the temporalis muscle, employing direct percutaneous electrode needles or transcutaneous electrical stimulation electrodes, was utilized in 11 primary and four secondary cases with complete facial palsy. The duration of the facial paralysis was up to 12 years. Postoperative follow-up ranged from 3 to 12 months. The insertion points of the temporalis muscle tendon to the nasolabial fold, upper lip, and oral commissure had been changed according to the intraoperative muscle stimulation in six patients of the 11 primary cases (55 percent) and in all four secondary (revisional) cases. A coordinated, spontaneous, and symmetrical smile was achieved in all patients by 3 months after surgery by employing speech therapy and biofeedback. This adjunct intraoperative refinement provides crucial feedback for the surgeon in both primary and secondary facial palsy cases regarding the vector of action of the temporalis muscle and the accuracy of the anchoring points of its tendon, thus enhancing a more coordinated and symmetrical smile.
Two-Stream Transformer Networks for Video-based Face Alignment.
Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on benchmarking video-based face alignment datasets show very competitive performance of our method in comparison to the state of the art.
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.
Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J
2015-09-01
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
Sex differences in social cognition: The case of face processing.
Proverbio, Alice Mado
2017-01-02
Several studies have demonstrated that women show a greater interest for social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.
Mandrini, Silvia; Comelli, Mario; Dall'angelo, Anna; Togni, Rossella; Cecini, Miriam; Pavese, Chiara; Dalla Toffola, Elena
2016-12-01
Only a few studies have considered the effects of the combined treatment with onabotulinumtoxinA (BoNT-A) injections and biofeedback (BFB) rehabilitation in the recovery of postparetic facial synkinesis (PPFS). To explore the presence of a persistent improvement in facial function beyond the pharmacological effect of BoNT-A in subjects with established PPFS, after repeated sessions of BoNT-A injections combined with an educational facial training program using mirror biofeedback (BFB) exercises. A secondary objective was to investigate the trend of the presumed persistent improvement. Case-series study. Outpatient Clinic of Physical Medicine and Rehabilitation Unit. Twenty-seven patients (22 females; mean age 45±16 years) affected by an established peripheral facial palsy, treated with a minimum of three BoNT-A injections in association with mirror BFB rehabilitation. The interval between consecutive BoNT-A injections was at least five months. At baseline and before every BoNT-A injection+mirror BFB session (when the effect of the previous BoNT-A injection had vanished), patients were assessed with the Italian version of the Sunnybrook Facial Grading System (SB). The statistical analysis compared SB composite and partial scores before each treatment session with the baseline scores. A significant improvement of the SB composite and partial scores was observed until the fourth session. Considering the "Symmetry of Voluntary Movement" partial score, the main improvement was observed in the muscles of the lower part of the face. In a chronic stage of postparetic facial synkinesis, patients may benefit from a combined therapy with repeated BoNT-A injections and an educational facial training program with mirror BFB exercises, gaining an improvement of facial function up to the fourth session. This improvement reflects the acquired ability to use facial muscles correctly. 
The improvement does not involve the injected muscles but rather those trained with the mirror BFB exercises, and it persists even after the BoNT-A action has vanished. The combined therapy with repeated BoNT-A injections and an educational facial training program using mirror BFB exercises may therefore be useful in the motor recovery of the muscles of the lower part of the face that are not injected but trained.
Modulation of α power and functional connectivity during facial affect recognition.
Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan
2013-04-03
Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex including the sensorimotor face area largely functionally decoupled and thereby protected from additional, disruptive input and that subsequent α power decrease together with increased connectedness of sensorimotor areas facilitates successful facial affect recognition.
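The node-degree metric from the abstract's graph-theoretic analysis can be sketched on a toy connectivity matrix; the synchrony values and threshold below are hypothetical, not the study's MEG data:

```python
# Illustrative sketch of "node degree" as used in connectivity analyses:
# threshold a symmetric phase-synchrony matrix into an adjacency graph and
# count each region's connections.

def node_degrees(synchrony, threshold=0.5):
    """Degree of each node after thresholding a symmetric synchrony matrix."""
    n = len(synchrony)
    return [sum(1 for j in range(n)
                if j != i and synchrony[i][j] >= threshold)
            for i in range(n)]

# Hypothetical pairwise phase-locking values among four sensor regions.
plv = [
    [1.0, 0.8, 0.2, 0.6],
    [0.8, 1.0, 0.4, 0.1],
    [0.2, 0.4, 1.0, 0.7],
    [0.6, 0.1, 0.7, 1.0],
]
print(node_degrees(plv))  # [2, 1, 1, 2]
```

A region whose degree drops, as the abstract describes for sensorimotor cortex during the power increase, is transregionally less connected under this measure.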
Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.
Reinl, Maren; Bartels, Andreas
2014-11-15
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
[Summary of professor YANG Jun's experience for intractable facial paralysis].
Wang, Tao; Li, Zaiyuan; Ge, Tingqiu; Zhang, Man; Yuan, Aihong; Yang, Jun
2017-06-12
Professor YANG Jun's experience of diagnosis and treatment for intractable facial paralysis is introduced. Professor YANG focuses on a thinking model that combines TCM, western medicine and acupuncture, and adopts a differentiation system that combines disease differentiation, syndrome differentiation and meridian differentiation; his treatment integrates etiological treatment, overall regulation and symptomatic treatment, as well as acupuncture, moxibustion, medication and flash cupping. The acupoints of the yangming meridians are mostly selected, and acupoints of the governor vessel such as Dazhui (GV 14) and Jinsuo (GV 8) are highly valued. A multiple-needle, shallow-penetration insertion technique with twirling, lifting and thrusting is mostly adopted to achieve a slow and mild acupuncture sensation; in addition, the facial muscles are pulled up with mechanical action. Intensive stimulation with electroacupuncture is recommended at Qianzheng (Extra), Yifeng (TE 17) and Yangbai (GB 14), given as two or three treatments per week.
Evidence of microbeads from personal care product contaminating the sea.
Cheung, Pui Kwan; Fok, Lincoln
2016-08-15
Plastic microbeads in personal care products have been identified as a source of marine pollution. Yet, their existence in the environment is rarely reported. During two surface manta trawls in the coastal waters of Hong Kong, eleven blue, spherical microbeads were captured. Their diameters ranged from 0.332 to 1.015 mm. These microbeads possessed characteristics similar in colour, shape and size to those identified and extracted from a facial scrub available in the local market. The FT-IR spectrum of the captured microbeads also matched that of the facial scrub. It is likely that the floating microbeads at the sea surface originated from a facial scrub and had bypassed or escaped the sewage treatment system in Hong Kong. Timely voluntary or legislative actions are required to prevent more microbeads from entering the aquatic environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wiertel-Krawczuk, Agnieszka; Huber, Juliusz; Wojtysiak, Magdalena; Golusiński, Wojciech; Pieńkowski, Piotr; Golusiński, Paweł
2015-05-01
Parotid gland tumor surgery sometimes leads to facial nerve paralysis. Malignant tumors, more than benign ones, affect nerve function preoperatively, while postoperative observations based on clinical, histological and neurophysiological studies have not been reported in detail. The aims of this pilot study were the evaluation and correlation of the histological properties of the tumor (its size and location) with the clinical and neurophysiological assessment of facial nerve function pre- and post-operatively (1 and 6 months). Comparative studies included 17 patients with benign (n = 13) and malignant (n = 4) tumors. Clinical assessment was based on the House-Brackmann scale (H-B); neurophysiological diagnostics included facial electroneurography [ENG, compound muscle action potential (CMAP)], mimetic muscle electromyography (EMG) and blink-reflex examinations (BR). Mainly grade I of H-B was recorded both pre- (n = 13) and post-operatively (n = 12) in patients with small (1.5-2.4 cm) benign tumors located in superficial lobes. Patients with medium size (2.5-3.4 cm) malignant tumors in both lobes were scored at grade I (n = 2) and III (n = 2) pre-operatively and mainly VI (n = 4) post-operatively. CMAP amplitudes after stimulation of the mandibular marginal branch were reduced by about 25 % in patients with benign tumors after surgery. In the cases of malignant tumors, CMAPs were not recorded following stimulation of any branch. A similar trend was found for BR results. H-B and ENG results revealed positive correlations of tumor type and surgery with facial nerve function. Neurophysiological studies detected clinically silent facial nerve neuropathy of the mandibular marginal branch in the postoperative period. Needle EMG, ENG and BR examinations allow for the evaluation of face muscle reinnervation and facial nerve regeneration.
Sasaki, Ryo; Takeuchi, Yuichi; Watanabe, Yorikatsu; Niimi, Yosuke; Sakurai, Hiroyuki; Miyata, Mariko; Yamato, Masayuki
2014-01-01
Background: Extensive facial nerve defects between the facial nerve trunk and its branches can be clinically reconstructed by incorporating double innervation into an end-to-side loop graft technique. This study developed a new animal model to evaluate the technique’s ability to promote nerve regeneration. Methods: Rats were divided into the intact, nonsupercharge, and supercharge groups. Artificially created facial nerve defects were reconstructed with a nerve graft, which was sutured end-to-end from the proximal facial nerve stump to the mandibular branch (nonsupercharge group), or with a graft whose other end was additionally sutured end-to-side to the hypoglossal nerve (supercharge group). The animals were evaluated after 30 weeks. Results: Axonal diameter was significantly larger in the supercharge group than in the nonsupercharge group for the buccal (3.78 ± 1.68 vs 3.16 ± 1.22; P < 0.0001) and marginal mandibular branches (3.97 ± 2.31 vs 3.46 ± 1.57; P < 0.0001), but the diameter was significantly larger in the intact group for all branches except the temporal branch. In the supercharge group, compound muscle action potential amplitude was significantly higher than in the nonsupercharge group (4.18 ± 1.49 mV vs 1.87 ± 0.37 mV; P < 0.0001) and similar to that in the intact group (4.11 ± 0.68 mV). Retrograde labeling showed that the mimetic muscles were double-innervated by the facial and hypoglossal nerve nuclei in the supercharge group. Conclusions: Multiple facial nerve branch reconstruction with an end-to-side loop graft was able to achieve axonal distribution. Additionally, axonal supercharge from the hypoglossal nerve significantly improved outcomes. PMID:25426357
Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex
Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.
2015-01-01
Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which being sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688
Relations of Early Goal-Blockage Response and Gender to Subsequent Tantrum Behavior
ERIC Educational Resources Information Center
Sullivan, Margaret W.; Lewis, Michael
2012-01-01
Infants and their mothers participated in a longitudinal study of the sequelae of infant goal-blockage responses. Four-month-old infants participated in a standard contingency learning and goal-blockage procedure during which anger and sad facial expressions to the blockage were coded. When infants were 12 and 20 months old, mothers completed a…
Generating and Describing Affective Eye Behaviors
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Zheng
The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from an AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccades). A rule-based approach that utilizes MPEG-4 FAPs (facial animation parameters) is introduced to generate primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as mixtures of two primary emotions). In addition, a scripting tool named EEMML (Emotional Eye Movement Markup Language) is proposed that enables authors to describe and generate the emotional eye movements of virtual agents.
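The mixture idea behind the intermediate emotions can be sketched minimally in Python. The FAP names, values, and the 50/50 weighting below are hypothetical illustrations, not the parameters or rules from the paper:

```python
# Illustrative sketch: blending two primary emotions' facial-animation-
# parameter (FAP) vectors into an intermediate emotion. Parameter names and
# values are invented; the MPEG-4 FAP set and the paper's actual rules are
# not reproduced here.

def blend_faps(fap_a, fap_b, weight_a=0.5):
    """Linear mixture of two FAP dictionaries; missing entries count as 0."""
    assert 0.0 <= weight_a <= 1.0
    keys = set(fap_a) | set(fap_b)
    return {k: weight_a * fap_a.get(k, 0.0) + (1.0 - weight_a) * fap_b.get(k, 0.0)
            for k in keys}

joy = {"raise_l_cornerlip": 0.8, "open_upper_eyelid": 0.3}
surprise = {"open_upper_eyelid": 0.9, "raise_l_i_eyebrow": 0.7}

# e.g. a "delighted surprise" blend as a 50/50 mixture of the two primaries
blended = blend_faps(joy, surprise, weight_a=0.5)
print(blended)
```

A rule table could assign a different weight per emotion pair; the linear blend is only the simplest possible choice.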
Documents Pertaining to Resource Conservation and Recovery Act Corrective Action Event Codes
Document containing RCRA Corrective Action event codes and definitions, including national requirements, initiating sources, dates, and guidance, from the first facility assessment until the Corrective Action is terminated.
Todd, Rebecca M; Lee, Wayne; Evans, Jennifer W; Lewis, Marc D; Taylor, Margot J
2012-07-01
The modulation of control processes by stimulus salience, as well as associated neural activation, changes over development. We investigated age-related differences in the influence of facial emotion on brain activation when an action had to be withheld, focusing on a developmental period characterized by rapid social-emotional and cognitive change. Groups of kindergarten and young school-aged children and a group of young adults performed a modified Go/Nogo task. Response cues were preceded by happy or angry faces. After controlling for task performance, left orbitofrontal regions discriminated trials with happy vs. angry faces in children but not in adults when a response was withheld, and this effect decreased parametrically with age group. Age-related changes in prefrontal responsiveness to facial expression were not observed when an action was required, nor did this region show age-related activation changes with the demand to withhold a response in general. Such results reveal age-related differences in prefrontal activation that are specific to stimulus valence and depend on the action required. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yang, Yang; Saleemi, Imran; Shah, Mubarak
2013-07-01
This paper proposes a novel representation of articulated human actions, gestures, and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one- or k-shot learning, and 2) meaningful organization of unlabeled datasets by unsupervised clustering. Our proposed representation is obtained by automatically discovering high-level subactions or motion primitives, by hierarchical clustering of observed optical flow in a four-dimensional spatial and motion-flow space. The completely unsupervised proposed method, in contrast to state-of-the-art representations like bag of video words, provides a meaningful representation conducive to visual interpretation and textual labeling. Each primitive action depicts an atomic subaction, like directional motion of a limb or the torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the primitives discovered in a test video are labeled using KL divergence; the resulting sequence of primitive labels can then be represented as a string and matched against similar strings of training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a Hidden Markov model to represent classes. We have performed extensive experiments on recognition by one- and k-shot learning as well as unsupervised action clustering on six human action and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation.
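The KL-divergence labeling step can be illustrated with a small sketch, under simplifying assumptions that are ours rather than the paper's: diagonal-covariance Gaussians instead of full four-dimensional mixtures, a symmetrized KL divergence, and made-up two-dimensional primitives.

```python
import math

# Illustrative sketch (not the authors' code): assign an observed flow
# descriptor to its nearest motion primitive, each primitive modeled as a
# diagonal-covariance Gaussian, using a symmetrized KL divergence.

def kl_diag(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two diagonal Gaussians, summed over dimensions."""
    return 0.5 * sum(
        math.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0
        for mp, vp, mq, vq in zip(mu_p, var_p, mu_q, var_q)
    )

def label(descriptor, primitives):
    """descriptor: (mean, variance) of observed flow; primitives: {name: (mean, variance)}."""
    mu_o, var_o = descriptor
    return min(
        primitives,
        key=lambda name: kl_diag(mu_o, var_o, *primitives[name])
                       + kl_diag(*primitives[name], mu_o, var_o),
    )

# Two hypothetical primitives for, say, vertical limb motion
primitives = {
    "arm_up":   ([0.0, 1.0],  [0.1, 0.1]),
    "arm_down": ([0.0, -1.0], [0.1, 0.1]),
}
print(label(([0.1, 0.8], [0.1, 0.1]), primitives))  # -> arm_up
```

Concatenating such labels over a video yields the string representation that the paper matches against training sequences.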
Effects of age and mild cognitive impairment on the pain response system.
Kunz, Miriam; Mylius, Veit; Schepelmann, Karsten; Lautenbacher, Stefan
2009-01-01
Both age and dementia have been shown to have an effect on nociception and pain processing. The question arises whether mild cognitive impairment (MCI), which is thought to be a transitional stage between normal ageing and dementia, is also associated with alterations in pain processing. The aim of the present study was to answer this question by investigating the impact of age and MCI on the pain response system. Forty young subjects, 45 cognitively unimpaired elderly subjects and 42 subjects with MCI were investigated by use of an experimental multi-method approach. The subjects were tested for their subjective (pain ratings), motor (RIII reflex), facial (Facial Action Coding System) and their autonomic (sympathetic skin response and evoked heart rate response) responses to noxious electrical stimulation of the nervus suralis. We found significant group differences in the autonomic responses to noxious stimulation. The sympathetic skin response amplitude was significantly reduced in the cognitively unimpaired elderly subjects compared to younger subjects and to an even greater degree in subjects with MCI. The evoked heart rate response was reduced to a similar degree in both groups of aged subjects. Regression analyses within the two groups of the elderly subjects revealed that age and, in the MCI group, cognitive status were significant predictors of the decrease in autonomic responsiveness to noxious stimulation. Except for the autonomic parameters, no other pain parameter differed between the three groups. The pain response system appeared to be quite unaltered in MCI patients compared to cognitively unimpaired individuals of the same age. Only the sympathetic responsiveness qualified as an indicator of early aging effects as well as of pathophysiology associated with MCI, which both seemed to affect the pain system independently from each other.
EEVEE: the Empathy-Enhancing Virtual Evolving Environment
Jackson, Philip L.; Michon, Pierre-Emmanuel; Geslin, Erik; Carignan, Maxime; Beaudoin, Danny
2015-01-01
Empathy is a multifaceted emotional and mental faculty that is often found to be affected in a great number of psychopathologies, such as schizophrenia, yet it remains very difficult to measure in an ecological context. The challenge stems partly from the complexity and fluidity of this social process, but also from its covert nature. One powerful tool to enhance experimental control over such dynamic social interactions has been the use of avatars in virtual reality (VR); information about an individual in such an interaction can be collected through the analysis of his or her neurophysiological and behavioral responses. We have developed a unique platform, the Empathy-Enhancing Virtual Evolving Environment (EEVEE), which is built around three main components: (1) different avatars capable of expressing feelings and emotions at various levels based on the Facial Action Coding System (FACS); (2) systems for measuring the physiological responses of the observer (heart and respiration rate, skin conductance, gaze and eye movements, facial expression); and (3) a multimodal interface linking the avatar's behavior to the observer's neurophysiological response. In this article, we provide a detailed description of the components of this innovative platform and validation data from the first phases of development. Our data show that healthy adults can discriminate different negative emotions, including pain, expressed by avatars at varying intensities. We also provide evidence that masking part of an avatar's face (top or bottom half) does not prevent the detection of different levels of pain. This innovative and flexible platform provides a unique tool to study and even modulate empathy in a comprehensive and ecological manner in various populations, notably individuals suffering from neurological or psychiatric disorders. PMID:25805983
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
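The norm-based prediction tested above, that aftereffects grow with the adaptor's distance from the average face, can be caricatured in a toy model (entirely schematic, our illustration rather than the authors' model): adaptation pulls the norm toward the adaptor, so a test face appears displaced away from it by an amount proportional to adaptor strength.

```python
# Toy norm-based coding model: expression strength is coded relative to a
# norm at 0. Adaptation shifts the norm part-way toward the adaptor, so a
# test face appears displaced in the opposite direction. The gain value is
# arbitrary; this is a schematic illustration, not the authors' model.

def perceived(test, adaptor, gain=0.3):
    norm_shift = gain * adaptor      # norm is pulled toward the adaptor
    return test - norm_shift         # percept re-expressed against the new norm

# The aftereffect on a neutral (0.0) test face grows with adaptor strength:
for strength in (0.5, 1.0, 2.0):
    print(strength, perceived(0.0, strength))
```

On exemplar-style accounts, by contrast, aftereffects need not keep growing with adaptor distance, which is what makes the adaptor-strength manipulation diagnostic.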
Tanzer, Michal; Shahar, Golan; Avidan, Galia
2014-01-01
The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret another person's facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals' personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model for psychotherapy are discussed. PMID:25165439
Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai
2014-01-01
Background Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, neural substrates of response inhibition to negative facial information remain unclear. Thus we used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing in response inhibition. Methods We used implicit (gender categorization) and explicit emotional Go/Nogo tasks (emotion categorization) in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks. Results We detected a task, emotion and trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes during sad conditions versus neutral conditions were detected with explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses on P3 component revealed that right inferior frontal junction (rIFJ) was involved during this stage. The current source density (CSD) of rIFJ was higher with sad conditions compared to neutral conditions for explicit tasks, rather than for implicit tasks. Conclusions The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212
Chao, Xiuhua; Xu, Lei; Li, Jianfeng; Han, Yuechen; Li, Xiaofei; Mao, YanYan; Shang, Haiqiong; Fan, Zhaomin; Wang, Haibo
2016-06-01
Conclusion C/GP hydrogel was demonstrated to be an ideal drug delivery vehicle and scaffold in the vein conduit. Combined use of an autologous vein and NGF continuously delivered by C/GP-NGF hydrogel can improve the recovery of facial nerve defects. Objective This study investigated the effects of chitosan-β-glycerophosphate-nerve growth factor (C/GP-NGF) hydrogel combined with an autologous vein conduit on the recovery of a damaged facial nerve in a rat model. Methods A 5 mm gap in the buccal branch of a rat facial nerve was reconstructed with an autologous vein. Next, C/GP-NGF hydrogel was injected into the vein conduit. In the negative control groups, NGF solution or phosphate-buffered saline (PBS) was injected into the vein conduits, respectively. Autologous implantation was used as the positive control group. Vibrissae movement, electrophysiological assessment, and morphological analysis of regenerated nerves were performed to assess nerve regeneration. Results NGF was continuously released from the C/GP-NGF hydrogel in vitro. The recovery rate of vibrissae movement and the compound muscle action potentials of the regenerated facial nerve in the C/GP-NGF group were similar to those in the Auto group, and significantly better than those in the NGF group. Furthermore, larger regenerated axons and thicker myelin sheaths were obtained in the C/GP-NGF group than in the NGF group.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-09
...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be exhaustive, but... apply to me? You may be potentially affected by this action if you are an agricultural producer, food...
Ichikawa, Hiroko; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke
2010-09-27
Adult observers can quickly identify specific actions performed by an invisible actor from the points of light attached to the actor's head and major joints. Infants are also sensitive to biological motion and prefer to see it depicted by a dynamic point-light display. In detecting biological motion such as whole-body and facial movements, neuroimaging studies have demonstrated the involvement of the occipitotemporal cortex, including the superior temporal sulcus (STS). In the present study, we used the point-light display technique and near-infrared spectroscopy (NIRS) to examine infant brain activity while viewing facial biological motion depicted in a point-light display. Dynamic facial point-light displays (PLD) were made from video recordings of three actors making a facial expression of surprise in a dark room. As in Bassili's study, about 80 luminous markers were scattered over the surface of the actors' faces. In the experiment, we measured infants' hemodynamic responses to these displays using NIRS. We hypothesized that infants would show different neural activity for upright and inverted PLD. The responses were compared to the baseline activation during the presentation of individual still images, which were frames extracted from the dynamic PLD. We found that the concentration of oxy-Hb increased in the right temporal area during the presentation of the upright PLD compared to that of the baseline period. This is the first study to demonstrate that infants' brain activity in face processing is induced only by the motion cue of facial movement depicted by dynamic PLD. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
2014-01-01
Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094
A systematic review and meta-analysis of 'Systems for Social Processes' in eating disorders.
Caglar-Nazali, H Pinar; Corfield, Freya; Cardi, Valentina; Ambwani, Suman; Leppanen, Jenni; Olabintan, Olaolu; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Scognamiglio, Pasquale; Eshkevari, Ertimiss; Micali, Nadia; Treasure, Janet
2014-05-01
Social and emotional problems have been implicated in the development and maintenance of eating disorders (ED). This paper reviews the facets of social processing in ED according to the NIMH Research Domain Criteria (NIMH RDoC) 'Systems for Social Processes' framework. Embase, Medline, PsycInfo and Web of Science were searched for peer-reviewed articles published by March 2013. One hundred and fifty-four studies measuring constructs of attachment, social communication, perception and understanding of self and others, and social dominance in people with ED were identified. Eleven meta-analyses were performed; they showed evidence that people with ED had attachment insecurity (d=1.31), perceived low parental care (d=0.51), appraised high parental overprotection (d=0.29), impaired facial emotion recognition (d=0.44) and facial communication (d=2.10), increased facial avoidance (d=0.52), reduced agency (d=0.39), negative self-evaluation (d=2.27), alexithymia (d=0.66), poor understanding of mental states (d=1.07), and sensitivity to social dominance (d=1.08). There is less evidence for problems with the production and reception of non-facial communication, animacy, and action. Copyright © 2013 Elsevier Ltd. All rights reserved.
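For reference, the d values quoted above are Cohen's d effect sizes. A minimal computation from two group summaries looks like this (all numbers below are invented for illustration; none come from the review):

```python
import math

# Cohen's d from two groups' means and SDs, using the pooled standard
# deviation. The example numbers are made up; they are not data from the
# review summarized above.

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# e.g. a hypothetical ED group vs. controls on an emotion-recognition score
d = cohens_d(mean1=18.0, sd1=4.0, n1=30, mean2=20.0, sd2=4.0, n2=30)
print(round(d, 2))  # -> -0.5
```

By the usual convention, |d| around 0.2, 0.5, and 0.8 is read as small, medium, and large, so several of the pooled effects reported above are large.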
Dudas, Marek; Kim, Jieun; Li, Wai-Yee; Nagy, Andre; Larsson, Jonas; Karlsson, Stefan; Chai, Yang; Kaartinen, Vesa
2006-01-01
Transforming growth factor beta (TGF-β) proteins play important roles in morphogenesis of many craniofacial tissues; however, detailed biological mechanisms of TGF-β action, particularly in vivo, are still poorly understood. Here, we deleted the TGF-β type I receptor gene Alk5 specifically in the embryonic ectodermal and neural crest cell lineages. Failure in signaling via this receptor, either in the epithelium or in the mesenchyme, caused severe craniofacial defects including cleft palate. Moreover, the facial phenotypes of neural crest-specific Alk5 mutants included devastating facial cleft and appeared significantly more severe than the defects seen in corresponding mutants lacking the TGF-β type II receptor (TGFβRII), a prototypical binding partner of ALK5. Our data indicate that ALK5 plays unique, non-redundant cell-autonomous roles during facial development. Remarkable divergence between Tgfbr2 and Alk5 phenotypes, together with our biochemical in vitro data, imply that (1) ALK5 mediates signaling of a diverse set of ligands not limited to the three isoforms of TGF-β, and (2) ALK5 acts also in conjunction with type II receptors other than TGFβRII. PMID:16806156
Brain systems for assessing the affective value of faces
Said, Christopher P.; Haxby, James V.; Todorov, Alexander
2011-01-01
Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research—perception of emotional expressions and face evaluation—could be integrated into a common, cognitive neuroscience framework. PMID:21536552
Adaptation effects to attractiveness of face photographs and art portraits are domain-specific
Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph
2013-01-01
We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690
Bidirectional Gender Face Aftereffects: Evidence Against Normative Facial Coding.
Cronin, Sophie L; Spence, Morgan L; Miller, Paul A; Arnold, Derek H
2017-02-01
Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female face and all other faces look more masculine as a consequence (a unidirectional effect). According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine (a bidirectional effect). Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than adaptors post-adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.
HAPPEN CAN'T HEAR: An Analysis of Code-Blends in Hearing, Native Signers of American Sign Language
ERIC Educational Resources Information Center
Bishop, Michele
2011-01-01
Hearing native signers often learn sign language as their first language and acquire features that are characteristic of sign languages but are not present in equivalent ways in English (e.g., grammatical facial expressions and the structured use of space for setting up tokens and surrogates). Previous research has indicated that bimodal…
ERIC Educational Resources Information Center
Oster, Harriet; And Others
1992-01-01
Compared subjects' judgments about emotions expressed by the faces of infants pictured in slides to predictions made by the Max system of measuring emotional expression. Judgments did not coincide with Max predictions for fear, anger, sadness, and disgust. Results indicated that expressions of negative affect by infants are not fully…
Standardization of Code on Dental Procedures
1992-02-13
oral hard and soft tissues using a periodontal probe, mirror, and explorer, and bitewing, panoramic, or other radiographs as...of living tissue or inert material into periodontal osseous defects to regenerate new periodontal attachment (bone, periodontal ligament, and cementum...Simple (up to 5 cm). Repair and/or suturing of simple to moderately complicated wounds of facial and/or oral soft tissues . 7211 1.8 Repair
Scherr, Jessica F; Hogan, Abigail L; Hatton, Deborah; Roberts, Jane E
2017-12-01
This study investigated behavioral indicators of social fear in preschool boys with fragile X syndrome (FXS) with a low degree of autism spectrum disorder (ASD) symptoms (FXS-Low; n = 29), FXS with elevated ASD symptoms (FXS-High; n = 25), idiopathic ASD (iASD; n = 11), and typical development (TD; n = 36). Gaze avoidance, escape behaviors, and facial fear during a stranger approach were coded. Boys with elevated ASD symptoms displayed more avoidant gaze, looking less at the stranger and parent than those with low ASD symptoms across etiologies. The iASD group displayed more facial fear than the other groups. Results suggest etiologically distinct behavioral patterns of social fear in preschoolers with elevated ASD symptoms.
Ahmed, Lubna
2018-03-01
The ability to correctly interpret facial expressions is key to effective social interactions. People are well rehearsed and generally very efficient at correctly categorizing expressions. However, does their ability to do so depend on how cognitively loaded they are at the time? Using repeated-measures designs, we assessed the sensitivity of facial expression categorization to cognitive resources availability by measuring people's expression categorization performance during concurrent low and high cognitive load situations. In Experiment 1, participants categorized the 6 basic upright facial expressions in a 6-automated-facial-coding response paradigm while maintaining low or high loading information in working memory (N = 40; 60 observations per load condition). In Experiment 2, they did so for both upright and inverted faces (N = 46; 60 observations per load and inversion condition). In both experiments, expression categorization for upright faces was worse during high versus low load. Categorization rates actually improved with increased load for the inverted faces. The opposing effects of cognitive load on upright and inverted expressions are explained in terms of a cognitive load-related dispersion in the attentional window. Overall, the findings support that expression categorization is sensitive to cognitive resources availability and moreover suggest that, in this paradigm, it is the perceptual processing stage of expression categorization that is affected by cognitive load. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.
2016-08-01
This article aims to implement some practical applications using the Socibot Desktop social robot. We implement three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory so that it can be projected onto its face. The first application is created in the Compose submenu, which contains 5 file categories (audio, eyes, face, head, mood) that help in building the projected sequence. The second application is more complex; the completed program contains audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of each action unit (AU) of the facial muscles, its expressions, and its line of sight. The last application changes the robot's appearance using a guise we created. The guise was created in Adobe Photoshop and then loaded into the robot's memory.
Emotion processing deficits in alexithymia and response to a depth of processing intervention.
Constantinou, Elena; Panayiotou, Georgia; Theodorou, Marios
2014-12-01
Findings on alexithymic emotion difficulties have been inconsistent. We examined potential differences between alexithymic and control participants in general arousal, reactivity, facial and subjective expression, emotion labeling, and covariation between emotion response systems. A depth of processing intervention was introduced. Fifty-four participants (27 alexithymic), selected using the Toronto Alexithymia Scale-20, completed an imagery experiment (imagining joy, fear and neutral scripts), under instructions for shallow or deep emotion processing. Heart rate, skin conductance, facial electromyography and startle reflex were recorded along with subjective ratings. Results indicated hypo-reactivity to emotion among high alexithymic individuals, smaller and slower startle responses, and low covariation between physiology and self-report. No deficits in facial expression, labeling and emotion ratings were identified. Deep processing was associated with increased physiological reactivity and lower perceived dominance and arousal in high alexithymia. Findings suggest a tendency for avoidance of intense, unpleasant emotions and less defensive action preparation in alexithymia. Copyright © 2014 Elsevier B.V. All rights reserved.
Seligman, Sarah C; Giovannetti, Tania; Sestito, John; Libon, David J
2014-01-01
Mild functional difficulties have been associated with early cognitive decline in older adults and increased risk for conversion to dementia in mild cognitive impairment, but our understanding of this decline has been limited by a dearth of objective methods. This study evaluated the reliability and validity of a new system to code subtle errors on an established performance-based measure of everyday action and described preliminary findings within the context of a theoretical model of action disruption. Here 45 older adults completed the Naturalistic Action Test (NAT) and neuropsychological measures. NAT performance was coded for overt errors, and subtle action difficulties were scored using a novel coding system. An inter-rater reliability coefficient was calculated. Validity of the coding system was assessed using a repeated-measures ANOVA with NAT task (simple versus complex) and error type (overt versus subtle) as within-group factors. Correlation/regression analyses were conducted among overt NAT errors, subtle NAT errors, and neuropsychological variables. The coding of subtle action errors was reliable and valid, and episodic memory breakdown predicted subtle action disruption. Results suggest that the NAT can be useful in objectively assessing subtle functional decline. Treatments targeting episodic memory may be most effective in addressing early functional impairment in older age.
Longfier, Laetitia; Soussignan, Robert; Reissland, Nadja; Leconte, Mathilde; Marret, Stéphane; Schaal, Benoist; Mellier, Daniel
2016-12-01
Facial expressions of 5-6 month-old infants born preterm and at term were compared while tasting for the first time solid foods (two fruit and two vegetable purées) given by the mother. Videotapes of facial reactions to these foods were objectively coded during the first six successive spoons of each test food using Baby FACS and subjectively rated by naïve judges. Infant temperament was also assessed by the parents using the Infant Behaviour Questionnaire. Contrary to our expectations, infants born preterm expressed fewer negative emotions than infants born full-term. Naïve judges rated infants born preterm as displaying more liking than their full-term counterparts when tasting the novel foods. The analysis of facial expressions during the six spoonfuls of four successive meals (at 1-week intervals) suggested a familiarization effect with the frequency of negative expressions decreasing after tasting the second spoon, regardless of infant age, type of food and order of presentation. Finally, positive and negative dimensions of temperament reported by the parents were related with objective and subjective coding of affective reactions toward foods in infants born preterm or full-term. Our research indicates that premature infants are more accepting of novel foods than term infants and this could be used for supporting the development of healthy eating patterns in premature infants. Further research is needed to clarify whether reduced negativity by infants born prematurely to the exposure to novel solid foods reflects a reduction of an adaptive avoidant behaviour during the introduction of novel foods. Copyright © 2016. Published by Elsevier Ltd.
Palumbo, Letizia; Jellema, Tjeerd
2013-01-01
Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in the opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding. PMID:23409112
Bedeschi, Maria Francesca; Marangi, Giuseppe; Calvello, Maria Rosaria; Ricciardi, Stefania; Leone, Francesca Pia Chiara; Baccarin, Marco; Guerneri, Silvana; Orteschi, Daniela; Murdolo, Marina; Lattante, Serena; Frangella, Silvia; Keena, Beth; Harr, Margaret H; Zackai, Elaine; Zollino, Marcella
2017-11-01
Pitt-Hopkins syndrome (PTHS) is a neurodevelopmental disorder characterized by severe intellectual disability and a distinctive facial gestalt. It is caused by haploinsufficiency of the TCF4 gene. The TCF4 protein has different functional domains, with the NLS (nuclear localization signal) domain coded by exons 7-8 and the bHLH (basic Helix-Loop-Helix) domain coded by exon 18. Several alternatively spliced TCF4 variants have been described, allowing for translation of variable protein isoforms. Typical PTHS patients have impairment of at least the bHLH domain. To what extent impairment of the remaining domains contributes to the final phenotype is not clear. There is recent evidence that certain loss-of-function variants disrupting TCF4 are associated with mild ID, but not with typical PTHS. We describe a frameshift-causing partial gene deletion encompassing exons 4-6 of TCF4 in an adult patient with mild ID and nonspecific facial dysmorphisms but without the typical features of PTHS, and a c.520C > T nonsense variant within exon 8 in a child presenting with a severe phenotype largely mimicking PTHS, but lacking the typical facial dysmorphism. Investigation on mRNA, along with literature review, led us to suggest a preliminary phenotypic map of loss-of-function variants affecting TCF4. An intragenic phenotypic map of loss-of-function variants in TCF4 is suggested here for the first time: variants within exons 1-4 and exons 4-6 give rise to a recurrent phenotype with mild ID not in the spectrum of Pitt-Hopkins syndrome (biallelic preservation of both the NLS and bHLH domains); variants within exons 7-8 cause a severe phenotype resembling PTHS but in absence of the typical facial dysmorphism (impairment limited to the NLS domain); variants within exons 9-19 cause typical Pitt-Hopkins syndrome (impairment of at least the bHLH domain). Understanding the TCF4 molecular syndromology can allow for proper nosology in the current era of whole genomic investigations.
Copyright © 2017. Published by Elsevier Masson SAS.
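The exon-to-phenotype map proposed in the abstract above can be summarized as a simple lookup. This is only an illustrative sketch: the function name and the phenotype label strings are hypothetical, while the exon boundaries follow the text.

```python
# Illustrative sketch of the suggested intragenic phenotypic map of
# TCF4 loss-of-function variants. Names and labels are hypothetical;
# exon ranges follow the abstract.

def tcf4_phenotype(exon: int) -> str:
    """Map the exon hit by a loss-of-function variant to the
    phenotype class proposed by the authors."""
    if 1 <= exon <= 6:
        # NLS and bHLH domains preserved on both alleles
        return "mild ID, not in the Pitt-Hopkins spectrum"
    if 7 <= exon <= 8:
        # impairment limited to the NLS domain
        return "severe, PTHS-like, without typical facial dysmorphism"
    if 9 <= exon <= 19:
        # at least the bHLH domain impaired
        return "typical Pitt-Hopkins syndrome"
    raise ValueError(f"exon outside the 1-19 range of this map: {exon}")
```

A lookup of this kind only encodes the map as stated; real variant interpretation would also depend on variant type and isoform usage.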
The Mirror Neuron System: A Fresh View
Casile, Antonino; Caggiano, Vittorio; Ferrari, Pier Francesco
2013-01-01
Mirror neurons are a class of visuomotor neurons in the monkey premotor and parietal cortices that discharge during the execution and observation of goal-directed motor acts. They are deemed to be at the basis of primates’ social abilities. In this review, the authors provide a fresh view about two still open questions about mirror neurons. The first question is their possible functional role. By reviewing recent neurophysiological data, the authors suggest that mirror neurons might represent a flexible system that encodes observed actions in terms of several behaviorally relevant features. The second question concerns the possible developmental mechanisms responsible for their initial emergence. To provide a possible answer to this question, the authors review two different aspects of sensorimotor development: facial and hand movements, respectively. The authors suggest that possibly two different “mirror” systems might underlie the development of action understanding and imitative abilities in the two cases. More specifically, a possibly prewired system already present at birth but shaped by the social environment might underlie the early development of facial imitative abilities. On the contrary, an experience-dependent system might subserve perception-action couplings in the case of hand movements. The development of this latter system might be critically dependent on the observation of own movements. PMID:21467305
World Breastfeeding Week 1994: making the Code work.
1994-01-01
WHO adopted the International Code of Marketing of Breastmilk Substitutes in 1981, with the US being the only member voting against it. The US abandoned its opposition and voted for the International Code at the World Health Assembly in May 1994. The US was also part of a unanimous vote to promote a resolution that clearly proclaims breast milk to be better than breast milk substitutes and the best food for infants. World Breastfeeding Week 1994 brought renewed efforts to promote the International Code. In 1994, through its Making the Code Work campaign, the World Alliance for Breastfeeding Action (WABA) will work on increasing awareness about the mission and promise of the International Code, notify governments of the Innocenti target date, call for governments to introduce rules and regulations based on the International Code, and encourage public interest groups, professional organizations, and the general public to monitor enforcement of the Code. So far, 11 countries have passed legislation including all or almost all provisions of the International Code. Governments of 36 countries have passed legislation including only some provisions of the International Code. The International Baby Food Action Network (IBFAN), a coalition of more than 140 breastfeeding promotion groups, monitors implementation of the Code worldwide. IBFAN documents thousands of violations of the Code in its report, Breaking the Rules 1994. The violations consist of promoting breast milk substitutes to health workers, using labels describing a brand of formula in idealizing terms, or using labels that do not have warnings in the local language. We should familiarize ourselves with the provisions of the International Code and the status of the Code in our country. WABA provides an action folder which contains basic background information on the Code and action ideas.
Optical stimulation of the facial nerve: a surgical tool?
NASA Astrophysics Data System (ADS)
Richter, Claus-Peter; Teudt, Ingo Ulrik; Nevel, Adam E.; Izzo, Agnella D.; Walsh, Joseph T., Jr.
2008-02-01
One sequela of skull base surgery is the iatrogenic damage to cranial nerves. Devices that stimulate nerves with electric current can assist in the nerve identification. Contemporary devices have two main limitations: (1) the physical contact of the stimulating electrode and (2) the spread of the current through the tissue. In contrast to electrical stimulation, pulsed infrared optical radiation can be used to safely and selectively stimulate neural tissue. Stimulation and screening of the nerve is possible without making physical contact. The gerbil facial nerve was irradiated with 250-μs-long pulses of 2.12 μm radiation delivered via a 600-μm-diameter optical fiber at a repetition rate of 2 Hz. Muscle action potentials were recorded with intradermal electrodes. Nerve samples were examined for possible tissue damage. Eight facial nerves were stimulated with radiant exposures between 0.71-1.77 J/cm2, resulting in compound muscle action potentials (CmAPs) that were simultaneously measured at the m. orbicularis oculi, m. levator nasolabialis, and m. orbicularis oris. Resulting CmAP amplitudes were 0.3-0.4 mV, 0.15-1.4 mV and 0.3-2.3 mV, respectively, depending on the radial location of the optical fiber and the radiant exposure. Individual nerve branches were also stimulated, resulting in CmAP amplitudes between 0.2 and 1.6 mV. Histology revealed tissue damage at radiant exposures of 2.2 J/cm2, but no apparent damage at radiant exposures of 2.0 J/cm2.
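The radiant exposures reported above (e.g., 0.71-1.77 J/cm2) relate pulse energy to the illuminated area. A minimal sketch, assuming a uniform (top-hat) beam over the 600-μm fiber face; the 5 mJ pulse energy used below is a hypothetical value, not one reported in the study:

```python
import math

def radiant_exposure(pulse_energy_j: float, spot_diameter_um: float) -> float:
    """Radiant exposure H (J/cm^2) for a uniform beam:
    pulse energy divided by the illuminated area."""
    radius_cm = spot_diameter_um * 1e-4 / 2.0   # convert um -> cm
    area_cm2 = math.pi * radius_cm ** 2
    return pulse_energy_j / area_cm2

# A hypothetical 5 mJ pulse from the 600-um fiber gives H near the
# upper end of the reported 0.71-1.77 J/cm2 range:
h = radiant_exposure(5e-3, 600.0)
```

In practice the spot on tissue is larger than the fiber core because the beam diverges with fiber-to-tissue distance, so this is an upper-bound estimate.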
Age and sex-related differences in 431 pediatric facial fractures at a level 1 trauma center.
Hoppe, Ian C; Kordahi, Anthony M; Paik, Angie M; Lee, Edward S; Granick, Mark S
2014-10-01
Understanding age- and sex-related changes in the pattern of fractures and concomitant injuries observed in this patient population is helpful in understanding craniofacial development and the treatment of these unique injuries. The goal of this study was to examine all facial fractures occurring in a child and adolescent population (age 18 or less) at a trauma center to determine any age- or sex-related variability amongst fracture patterns and concomitant injuries. All facial fractures occurring at a trauma center were collected over a 12-year period based on International Classification of Diseases, 9th revision (ICD-9) codes. This was delimited to include only those patients 18 years of age or younger. Age, sex, mechanism, and fracture types were collected and analyzed. During this time period, there were 3147 patients with facial fractures treated at our institution, 353 of which were in children and adolescent patients. Upon further review 68 patients were excluded due to insufficient data for analysis, leaving 285 patients for review, with a total of 431 fractures. The most common etiology of injury was assault for males and motor vehicle accidents (MVA) for females. The most common fracture was of the mandible in males and of the orbit in females. The most common etiologies in younger age groups include falls and pedestrian-struck injuries. Older age groups exhibit a higher incidence of assault-related injuries. Younger age groups showed a propensity for orbital fractures as opposed to older age groups where mandibular fractures predominated. Intracranial hemorrhage was the most common concomitant injury across most age groups. The differences noted in etiology of injury, fracture patterns, and concomitant injuries between sexes and different age groups likely reflect the differing activities that each group engages in predominantly. In addition, the growing facial skeleton offers varying degrees of protection to the cranial contents as force-absorbing mechanisms develop.
Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Digital assessment of the fetal alcohol syndrome facial phenotype: reliability and agreement study.
Tsang, Tracey W; Laing-Aiken, Zoe; Latimer, Jane; Fitzpatrick, James; Oscar, June; Carter, Maureen; Elliott, Elizabeth J
2017-01-01
To examine the three facial features of fetal alcohol syndrome (FAS) in a cohort of Australian Aboriginal children from two-dimensional digital facial photographs to: (1) assess intrarater and inter-rater reliability; (2) identify the racial norms with the best fit for this population; and (3) assess agreement with clinician direct measures. Photographs and clinical data for 106 Aboriginal children (aged 7.4-9.6 years) were sourced from the Lililwan Project. Fifty-eight per cent had a confirmed prenatal alcohol exposure and 13 (12%) met the Canadian 2005 criteria for FAS/partial FAS. Photographs were analysed using the FAS Facial Photographic Analysis Software to generate the mean palpebral fissure length (PFL), three-point ABC-Score, five-point lip and philtrum ranks, and four-point face rank in accordance with the 4-Digit Diagnostic Code. Intrarater and inter-rater reliability of digital ratings was examined in two assessors. Caucasian or African American racial norms for PFL and lip thickness were assessed for best fit; and agreement between digital and direct measurement methods was assessed. Reliability of digital measures was substantial within (kappa: 0.70-1.00) and between assessors (kappa: 0.64-0.89). Clinician and digital ratings showed moderate agreement (kappa: 0.47-0.58). Caucasian PFL norms and the African American Lip-Philtrum Guide 2 provided the best fit for this cohort. In an Aboriginal cohort with a high rate of FAS, assessment of facial dysmorphology using digital methods showed substantial inter- and intrarater reliability. Digital measurement of features has high reliability and until data are available from a larger population of Aboriginal children, the African American Lip-Philtrum Guide 2 and Caucasian (Strömland) PFL norms provide the best fit for Australian Aboriginal children.
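The kappa values quoted above measure chance-corrected agreement between raters. A minimal sketch of unweighted Cohen's kappa for two raters' categorical codes; the study's exact computation (e.g., any weighting for ordinal ranks) is not specified here, so this is illustrative:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa: (observed - expected agreement)
    divided by (1 - expected agreement). Assumes equal-length,
    non-empty code sequences with imperfect expected agreement."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement from each rater's marginal category frequencies
    expected = sum(c1[c] * c2[c] for c in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```

Identical ratings give kappa = 1.0; agreement no better than chance gives kappa near 0.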
A systems view of mother-infant face-to-face communication.
Beebe, Beatrice; Messinger, Daniel; Bahrick, Lorraine E; Margolis, Amy; Buck, Karen A; Chen, Henian
2016-04-01
Principles of a dynamic, dyadic systems view of mother-infant face-to-face communication, which considers self- and interactive processes in relation to one another, were tested. The process of interaction across time in a large low-risk community sample at infant age 4 months was examined. Split-screen videotape was coded on a 1-s time base for communication modalities of attention, affect, orientation, touch, and composite facial-visual engagement. Time-series approaches generated self- and interactive contingency estimates in each modality. Evidence supporting the following principles was obtained: (a) Significant moment-to-moment predictability within each partner (self-contingency) and between the partners (interactive contingency) characterizes mother-infant communication. (b) Interactive contingency is organized by a bidirectional, but asymmetrical, process: Maternal contingent coordination with infant is higher than infant contingent coordination with mother. (c) Self-contingency organizes communication to a far greater extent than interactive contingency. (d) Self- and interactive contingency processes are not separate; each affects the other in communication modalities of facial affect, facial-visual engagement, and orientation. Each person's self-organization exists in a dynamic, homoeostatic (negative feedback) balance with the degree to which the person coordinates with the partner. For example, those individuals who are less facially stable are likely to coordinate more strongly with the partner's facial affect and vice versa. Our findings support the concept that the dyad is a fundamental unit of analysis in the investigation of early interaction. Moreover, an individual's self-contingency is influenced by the way the individual coordinates with the partner. Our results imply that it is not appropriate to conceptualize interactive processes without simultaneously accounting for dynamically interrelated self-organizing processes. 
(PsycINFO Database Record (c) 2016 APA, all rights reserved). PMID:26882118
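The self- and interactive contingency estimates described above came from time-series models of second-by-second codes. As a much-simplified sketch, moment-to-moment predictability can be illustrated with a lag-1 correlation; the variable names and toy series below are illustrative, not the study's data or model:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def lagged_contingency(predictor, outcome, lag=1):
    """Correlate predictor at time t-lag with outcome at time t.
    predictor == outcome gives a crude self-contingency estimate;
    different partners give an interactive-contingency estimate."""
    return pearson(predictor[:-lag], outcome[lag:])

# If the mother simply mirrored the infant one second later,
# interactive contingency would be perfect:
infant = [0, 1, 0, 1, 0, 1, 0, 1]
mother = [0] + infant[:-1]
```

The actual analyses used multilevel time-series models that estimate self- and interactive contingency jointly; a raw lagged correlation conflates the two, which is exactly why the joint estimation matters.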
Nonablative laser treatment of facial rhytides
NASA Astrophysics Data System (ADS)
Lask, Gary P.; Lee, Patrick K.; Seyfzadeh, Manouchehr; Nelson, J. Stuart; Milner, Thomas E.; Anvari, Bahman; Dave, Digant P.; Geronemus, Roy G.; Bernstein, Leonard J.; Mittelman, Harry; Ridener, Laurie A.; Coulson, Walter F.; Sand, Bruce; Baumgarder, Jon; Hennings, David R.; Menefee, Richard F.; Berry, Michael J.
1997-05-01
The purpose of this study is to evaluate the safety and effectiveness of the New Star Model 130 neodymium:yttrium aluminum garnet (Nd:YAG) laser system for nonablative laser treatment of facial rhytides (e.g., periorbital wrinkles). Facial rhytides are treated with 1.32 micrometer wavelength laser light delivered through a fiberoptic handpiece into a 5 mm diameter spot using three 300 microsecond duration pulses at 100 Hz pulse repetition frequency and pulse radiant exposures extending up to 12 J/cm2. Dynamic cooling is used to cool the epidermis selectively prior to laser treatment; animal histology experiments confirm that dynamic cooling combined with nonablative laser heating protects the epidermis and selectively injures the dermis. In the human clinical study, immediately post-treatment, treated sites exhibit mild erythema and, in a few cases, edema or small blisters. There are no long-term complications such as marked dyspigmentation and persistent erythema that are commonly observed following ablative laser skin resurfacing. Preliminary results indicate that the severity of facial rhytides has been reduced, but long-term follow-up examinations are needed to quantify the reduction. The mechanism of action of this nonablative laser treatment modality may involve dermal wound healing that leads to long-term synthesis of new collagen and extracellular matrix material.
Quinto-Sánchez, Mirsha; Muñoz-Muñoz, Francesc; Gomez-Valdes, Jorge; Cintas, Celia; Navarro, Pablo; Cerqueira, Caio Cesar Silva de; Paschetta, Carolina; de Azevedo, Soledad; Ramallo, Virginia; Acuña-Alonzo, Victor; Adhikari, Kaustubh; Fuentes-Guajardo, Macarena; Hünemeier, Tábita; Everardo, Paola; de Avila, Francisco; Jaramillo, Claudia; Arias, Williams; Gallo, Carla; Poletti, Giovani; Bedoya, Gabriel; Bortolini, Maria Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Rosique, Javier; Ruiz-Linares, Andres; Gonzalez-Jose, Rolando
2018-01-17
Facial asymmetries are usually measured and interpreted as proxies to developmental noise. However, analyses focused on its developmental and genetic architecture are scarce. To advance on this topic, studies based on a comprehensive and simultaneous analysis of modularity, morphological integration and facial asymmetries including both phenotypic and genomic information are needed. Here we explore several modularity hypotheses on a sample of Latin American mestizos, in order to test if modularity and integration patterns differ across several genomic ancestry backgrounds. To do so, 4104 individuals were analyzed using 3D photogrammetry reconstructions and a set of 34 facial landmarks placed on each individual. We found a pattern of modularity and integration that is conserved across sub-samples differing in their genomic ancestry background. Specifically, a signal of modularity based on functional demands and organization of the face is regularly observed across the whole sample. Our results shed more light on previous evidence obtained from Genome Wide Association Studies performed on the same samples, indicating the action of different genomic regions contributing to the expression of the nose and mouth facial phenotypes. Our results also indicate that large samples including phenotypic and genomic metadata enable a better understanding of the developmental and genetic architecture of craniofacial phenotypes.
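Facial asymmetry from landmark data is typically quantified by comparing a configuration with its mirrored counterpart. The study above used Procrustes-based methods on 3D landmarks; as a deliberately simplified 2D sketch, assuming the facial midline lies on the x = 0 axis and using hypothetical landmark pairs:

```python
def asymmetry_score(paired_landmarks):
    """Toy fluctuating-asymmetry score for bilateral landmark pairs
    given as (x_left, x_right) coordinates. Assumes the midline is
    the x = 0 axis, so a symmetric pair has x_left == -x_right.
    Real analyses use Procrustes superimposition of full 3D
    configurations rather than raw coordinates."""
    return sum(abs(xl + xr) for xl, xr in paired_landmarks) / len(paired_landmarks)

symmetric = [(-1.0, 1.0), (-2.5, 2.5)]   # mirror-image pairs
shifted = [(-1.0, 1.2), (-2.5, 2.9)]     # right side displaced outward
```

A score of 0 indicates perfect bilateral symmetry under this toy convention; larger values indicate greater left-right mismatch.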
Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences
Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole
2011-01-01
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’ motion); or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. 
Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se. PMID:21695266
Action and emotion recognition from point light displays: an investigation of gender differences.
Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P; Wenderoth, Nicole
2011-01-01
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of 'biological motion' versus 'non-biological' (or 'scrambled' motion); or (ii) the recognition of the 'emotional state' of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test' (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. 
Hearing sounds, understanding actions: action representation in mirror neurons.
Kohler, Evelyne; Keysers, Christian; Umiltà, M Alessandra; Fogassi, Leonardo; Gallese, Vittorio; Rizzolatti, Giacomo
2002-08-02
Many object-related actions can be recognized by their sound. We found neurons in monkey premotor cortex that discharge when the animal performs a specific action and when it hears the related sound. Most of the neurons also discharge when the monkey observes the same action. These audiovisual mirror neurons code actions independently of whether these actions are performed, heard, or seen. This discovery in the monkey homolog of Broca's area might shed light on the origin of language: audiovisual mirror neurons code abstract contents-the meaning of actions-and have the auditory access typical of human language to these contents.
The development of motor behavior
Adolph, Karen E.; Franchak, John M.
2016-01-01
This article reviews research on the development of motor behavior from a developmental systems perspective. We focus on infancy when basic action systems are acquired. Posture provides a stable base for locomotion, manual actions, and facial actions. Experience facilitates improvements in motor behavior and infants accumulate immense amounts of experience with all of their basic action systems. At every point in development, perception guides motor behavior by providing feedback about the results of just prior movements and information about what to do next. Reciprocally, the development of motor behavior provides fodder for perception. More generally, motor development brings about new opportunities for acquiring knowledge about the world, and burgeoning motor skills can instigate cascades of developmental changes in perceptual, cognitive, and social domains. PMID:27906517
Effects of Action Relations on the Configural Coding between Objects
ERIC Educational Resources Information Center
Riddoch, M. J.; Pippard, B.; Booth, L.; Rickell, J.; Summers, J.; Brownson, A.; Humphreys, G. W.
2011-01-01
Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with…
Chen, Kuan-Hua; Lwi, Sandy J.; Hua, Alice Y.; Haase, Claudia M.; Miller, Bruce L.; Levenson, Robert W.
2017-01-01
Although laboratory procedures are designed to produce specific emotions, participants often experience mixed emotions (i.e., target and non-target emotions). We examined non-target emotions in patients with frontotemporal dementia (FTD), Alzheimer’s disease (AD), other neurodegenerative diseases, and healthy controls. Participants watched film clips designed to produce three target emotions. Subjective experience of non-target emotions was assessed and emotional facial expressions were coded. Compared to patients with other neurodegenerative diseases and healthy controls, FTD patients reported more positive and negative non-target emotions, whereas AD patients reported more positive non-target emotions. There were no group differences in facial expressions of non-target emotions. We interpret these findings as reflecting deficits in processing interoceptive and contextual information resulting from neurodegeneration in brain regions critical for creating subjective emotional experience. PMID:29457053
Reading Minds: How Infants Come to Understand Others
ERIC Educational Resources Information Center
Gopnik, Alison; Seiver, Elizabeth
2009-01-01
Navigating the social world is an extraordinarily difficult and complex task. How do we think about other people's minds, and how do we come to infer other people's intentions from their actions? Developmental psychologists have shown that even very young infants are attuned to the emotions of those around them, imitate facial expressions and…
ERIC Educational Resources Information Center
Davidson, Jane W.
2012-01-01
The research literature concerning gesture in musical performance increasingly reports that musically communicative and meaningful performances contain highly expressive bodily movements. These movements are involved in the generation of the musically expressive performance, but enquiry into the development of expressive bodily movement has been…
Kleydman, Kate; Cohen, Joel L; Marmur, Ellen
2012-12-01
Skin necrosis after soft tissue augmentation with dermal fillers is a rare but potentially severe complication. Nitroglycerin paste may be an important treatment option for dermal and epidermal ischemia in cosmetic surgery. To summarize the knowledge about nitroglycerin paste in cosmetic surgery and to understand its current use in the treatment of vascular compromise after soft tissue augmentation. To review the mechanism of action of nitroglycerin, examine its utility in the dermal vasculature in the setting of dermal filler-induced ischemia, and describe the facial anatomy danger zones in order to avoid vascular injury. A literature review was conducted to examine the mechanism of action of nitroglycerin, and a treatment algorithm was proposed from clinical observations to define strategies for impending facial necrosis after filler injection. Our experience with nitroglycerin paste and our review of the medical literature supports the use of nitroglycerin paste on the skin to help improve flow in the dermal vasculature because of its vasodilatory effect on small-caliber arterioles. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
Thermal Face Protection Delays Finger Cooling and Improves Thermal Comfort during Cold Air Exposure
2011-01-01
2011 Journal Article - Eur Journal of Applied Physiology. Thermal face protection delays finger cooling and improves thermal comfort during cold air...remains exposed. Facial cooling can decrease finger blood flow, reducing finger temperature (Tf). This study examined whether thermal face protection...limits finger cooling and thereby improves thermal comfort and manual dexterity during prolonged cold exposure. Tf was measured in ten volunteers dressed
Extraversion and the Rewarding Effects of Alcohol in a Social Context
Fairbairn, Catharine E.; Sayette, Michael A.; Wright, Aidan G. C.; Levine, John M.; Cohn, Jeffrey F.; Creswell, Kasey G.
2015-01-01
The personality trait of extraversion has been linked to problematic drinking patterns. Researchers have long hypothesized that such associations are attributable to increased alcohol-reward sensitivity among extraverted individuals, and surveys suggest that individuals high in extraversion gain greater mood enhancement from alcohol than those low in extraversion. Surprisingly, however, alcohol administration studies have not found individuals high in extraversion to experience enhanced mood following alcohol consumption. Of note, prior studies have examined extraverted participants—individuals who self-identify as being highly social—consuming alcohol in isolation. In the present research, we used a group drinking paradigm to examine whether individuals high in extraversion gained greater reward from alcohol than did those low in extraversion and, further, whether a particular social mechanism (partners’ Duchenne smiling) might underlie alcohol reward sensitivity among extraverted individuals. Social drinkers (n = 720) consumed a moderate dose of alcohol, placebo, or control beverage in groups of three over the course of 36 min. This social interaction was video-recorded, and Duchenne smiling was coded using the Facial Action Coding System. Results indicated that participants high in extraversion reported significantly more mood enhancement from alcohol than did those low in extraversion. Further, mediated moderation analyses focusing on Duchenne smiling of group members indicated that social processes fully and uniquely accounted for alcohol reward-sensitivity among individuals high in extraversion. Results provide initial experimental evidence that individuals high in extraversion experience increased mood-enhancement from alcohol and further highlight the importance of considering social processes in the etiology of Alcohol Use Disorder. PMID:25844684
Greene, Jacqueline J; McClendon, Mark T; Stephanopoulos, Nicholas; Álvarez, Zaida; Stupp, Samuel I; Richter, Claus-Peter
2018-04-27
Facial nerve injury can cause severe long-term physical and psychological morbidity. There are limited repair options for an acutely transected facial nerve not amenable to primary neurorrhaphy. We hypothesize that a peptide amphiphile nanofiber neurograft may provide the nanostructure necessary to guide organized neural regeneration. Five experimental groups were compared: animals with 1) an intact nerve, 2) resection of a nerve segment without repair, and resection with immediate repair using either a 3) autograft (the resected nerve segment), 4) neurograft, or 5) empty conduit. The buccal branch of the rat facial nerve was directly stimulated with charge-balanced biphasic electrical current pulses at different current amplitudes while nerve compound action potentials (nCAPs) and electromyographic (EMG) responses were recorded. After 8 weeks, the proximal buccal branch was surgically re-exposed and electrically evoked nCAPs were recorded for groups 1-5. As expected, the intact nerves required significantly lower current amplitudes to evoke an nCAP than those repaired with the neurograft and autograft nerves. For other electrophysiologic parameters such as latency and maximum nCAP, there was no significant difference between the intact, autograft, and neurograft groups. The resected group had variable responses to electrical stimulation, and the empty tube group was electrically silent. Immunohistochemical analysis and TEM confirmed myelinated neural regeneration. This study demonstrates that the neuroregenerative capability of peptide amphiphile nanofiber neurografts is similar to the current clinical gold standard method of repair and holds potential as an off-the-shelf solution for facial reanimation and potentially peripheral nerve repair. This article is protected by copyright. All rights reserved.
Soccer-related Facial Trauma: Multicenter Experience in 2 Brazilian University Hospitals
Dini, Gal M.; Pereira, Max D.; Gurgel, Augusto; Bastos, Endrigo O.; Nagarkar, Purushottam; Gemperli, Rolf; Ferreira, Lydia M.
2014-01-01
Background: Soccer is the most popular sport in Brazil and a high incidence of related trauma is reported. Maxillofacial trauma can be quite common, sometimes requiring prolonged hospitalization and invasive procedures. To characterize soccer-related facial fractures needing surgery in 2 major Brazilian Centers. Methods: A retrospective review of trauma medical records from the Plastic Surgery Divisions at the Universidade Federal de São Paulo–Escola Paulista de Medicina and the Hospital das Clinicas–Universidade de São Paulo was carried out to identify patients who underwent invasive surgical procedures due to acute soccer-related facial fractures. Data points reviewed included gender, date of injury, type of fracture, date of surgery, and procedure performed. Results: A total of 45 patients (31 from Escola Paulista de Medicina and 14 from Universidade de São Paulo) underwent surgical procedures to address facial fractures between March 2000 and September 2013. Forty-four patients were men, and mean age was 28 years. The fracture patterns seen were nasal bones (16 patients, 35%), orbitozygomatic (16 patients, 35%), mandibular (7 patients, 16%), orbital (6 patients, 13%), frontal (1 patient, 2%), and naso-orbito-ethmoid (1 patient, 2%). Mechanisms of injury included collisions with another player (n = 39) and being struck by the ball (n = 6). Conclusions: Although it is less common than orthopedic injuries, soccer players do sustain maxillofacial trauma. Knowledge of its frequency is important to first responders, nurses, and physicians who have initial contact with patients. Missed diagnosis or delayed treatment can lead to facial deformities and functional problems in the physiological actions of breathing, vision, and chewing. PMID:25289361
Spontaneous Facial Mimicry is Modulated by Joint Attention and Autistic Traits.
Neufeld, Janina; Ioannou, Christina; Korb, Sebastian; Schilbach, Leonhard; Chakrabarti, Bhismadev
2016-07-01
Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have been usually observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent to the participants' gaze. SFM of happy expressions was measured by Electromyography (EMG) recording over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism. Autism Res 2016, 9: 781-789. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.
Memory for faces: the effect of facial appearance and the context in which the face is encountered.
Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio
2015-03-01
We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.
Display rules for anger and aggression in school-age children.
Underwood, M K; Coie, J D; Herbsman, C R
1992-04-01
2 related studies addressed the development of display rules for anger and the relation between the use of display rules for anger and aggressiveness as rated by school peers. Third, fifth, and seventh graders (mean ages 8.4, 10.9, and 12.8 years, respectively) gave hypothetical responses to videotaped, anger-provoking vignettes. Overall, regardless of how display rules were defined, subjects reported display rules more often with teachers than with peers for both facial expressions and actions. Reported masking of facial expressions of anger increased with age, but only with teachers. Girls reported masking facial expressions of anger more than boys. There was a trend for aggressive subjects to invoke display rules for anger less than nonaggressive subjects. The phenomenon of display rules for anger is complex and dependent on the way display rules are defined and on the age and gender of the subjects. Most of all, whether children say they would behave angrily seems to be determined by the social context for revealing angry feelings; children say they would express anger genuinely much more often with peers than with teachers.
Infants' Perception of Emotion from Body Movements
ERIC Educational Resources Information Center
Zieber, Nicole; Kangas, Ashley; Hock, Alyson; Bhatt, Ramesh S.
2014-01-01
Adults recognize emotions conveyed by bodies with comparable accuracy to facial emotions. However, no prior study has explored infants' perception of body emotions. In Experiment 1, 6.5-month-olds (n = 32) preferred happy over neutral actions of actors with covered faces in upright but not inverted silent videos. In Experiment 2, infants…
ERIC Educational Resources Information Center
Ferrari, Pier Francesco; Paukner, Annika; Ruggiero, Angela; Darcey, Lisa; Unbehagen, Sarah; Suomi, Stephen J.
2009-01-01
The capacity to imitate facial gestures is highly variable in rhesus macaques and this variability may be related to differences in specific neurobehavioral patterns of development. This study evaluated the differential neonatal imitative response of 41 macaques in relation to the development of sensory, motor, and cognitive skills throughout the…
Direct effects of diazepam on emotional processing in healthy volunteers
Murphy, S. E.; Downham, C.; Cowen, P. J.
2008-01-01
Rationale Pharmacological agents used in the treatment of anxiety have been reported to decrease threat relevant processing in patients and healthy controls, suggesting a potentially relevant mechanism of action. However, the effects of the anxiolytic diazepam have typically been examined at sedative doses, which do not allow the direct actions on emotional processing to be fully separated from global effects of the drug on cognition and alertness. Objectives The aim of this study was to investigate the effect of a lower, but still clinically effective, dose of diazepam on emotional processing in healthy volunteers. Materials and methods Twenty-four participants were randomised to receive a single dose of diazepam (5 mg) or placebo. Sixty minutes later, participants completed a battery of psychological tests, including measures of non-emotional cognitive performance (reaction time and sustained attention) and emotional processing (affective modulation of the startle reflex, attentional dot probe, facial expression recognition, and emotional memory). Mood and subjective experience were also measured. Results Diazepam significantly modulated attentional vigilance to masked emotional faces and significantly decreased overall startle reactivity. Diazepam did not significantly affect mood, alertness, response times, facial expression recognition, or sustained attention. Conclusions At non-sedating doses, diazepam produces effects on attentional vigilance and startle responsivity that are consistent with its anxiolytic action. This may be an underlying mechanism through which benzodiazepines exert their therapeutic effects in clinical anxiety. PMID:18581100
Face recognition with the Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Suarez, Pedro F.
1991-12-01
The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
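The eigenface procedure summarized in this abstract (project faces onto the leading eigenvectors of the image covariance matrix and use the projection coefficients as features) can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general KLT technique, not code from the thesis; the function names and the small-Gram-matrix shortcut are our own:

```python
import numpy as np

def eigenfaces(faces, k):
    """Compute a k-dimensional KLT (eigenface) basis from a stack of
    flattened face images, shape (n_images, n_pixels)."""
    mean = faces.mean(axis=0)
    X = faces - mean                          # center the data
    # For n_images << n_pixels, the eigenvectors of the small Gram
    # matrix X X^T map back through X^T to the leading eigenfaces.
    gram = X @ X.T
    vals, vecs = np.linalg.eigh(gram)         # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]        # top-k components
    basis = X.T @ vecs[:, order]              # (n_pixels, k)
    basis /= np.linalg.norm(basis, axis=0)    # orthonormal columns
    return mean, basis

def project(face, mean, basis):
    """KLT feature vector: projection coefficients onto the eigenfaces."""
    return basis.T @ (face - mean)
```

Diagonalizing the n_images × n_images Gram matrix instead of the full pixel covariance is much cheaper when images outnumber pixels by far less, yet yields the same leading eigenfaces.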
Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion
Im, Hee Yeon; Albohn, Daniel N.; Steiner, Troy G.; Cushing, Cody A.; Adams, Reginald B.; Kveraga, Kestutis
2017-01-01
In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading “crowd emotion”. We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions. PMID:29226255
Evaluation of persons of varying ages.
Stolte, J F
1996-06-01
Dual coding theory (Paivio, 1986) suggests that communicating a stimulus person's age verbally/abstractly through words and numbers arouses little feeling and has little effect on the way others evaluate her or him, whereas communicating age nonverbally/concretely through facial photographs arouses more feeling and has a greater impact on evaluation. Two experiments reported in this article, involving U.S. students and incorporating techniques developed in prior research by Levin (1988) strongly support these theoretical expectations.
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a critically important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
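The two fusion strategies described here, early (feature-level) and late (decision-level), can be illustrated with a minimal sketch. This is a simplified stand-in with toy weights of our own choosing; the paper's actual pipeline additionally projects the fused features with LDA and classifies with a condensation-based algorithm, both omitted here:

```python
import numpy as np

def feature_level_fusion(feature_groups, weights):
    """Early fusion: scale each modality's feature vector by a weight,
    then concatenate into a single descriptor before classification."""
    return np.concatenate([w * np.asarray(f, dtype=float)
                           for f, w in zip(feature_groups, weights)])

def decision_level_fusion(score_vectors, weights):
    """Late fusion: combine per-modality class scores as a weighted sum,
    then pick the highest-scoring class."""
    fused = sum(w * np.asarray(s, dtype=float)
                for s, w in zip(score_vectors, weights))
    return int(np.argmax(fused))
```

For example, fusing a 2-dimensional facial feature vector with a 3-dimensional hand-motion vector yields one 5-dimensional descriptor, whereas late fusion only ever combines the per-class scores each modality's classifier has already produced.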
de Faria, Maria Estela Justamante; Carvalho, Luciani R; Rossetto, Shirley M; Amaral, Terezinha Sampaio; Berger, Karina; Arnhold, Ivo Jorge Prado; Mendonca, Berenice Bilharinho
2009-01-01
There are many controversies regarding side effects on craniofacial and extremity growth due to growth hormone (GH) treatment. Our aim was to estimate GH action on craniofacial development and extremity growth in GH-deficient patients. Twenty patients with GH deficiency with a chronological age ranging from 4.6 to 24.3 years (bone age from 1.5 to 13 years) were divided in 2 groups: group 1 (n = 6), naive to GH treatment, and group 2 (n = 14), ongoing GH treatment for 2-11 years. GH doses (0.1-0.15 U/kg/day) were adjusted to maintain insulin-like growth factor 1 and insulin-like growth factor binding protein 3 levels within the normal range. Anthropometric measurements, cephalometric analyses and facial photographs to verify profile and harmony were performed annually for at least 3 years. Two patients with a disharmonious profile due to mandibular growth attained harmony, and none of them developed facial disharmony. Increased hand or foot size (>P97) was observed in 2 female patients and in 4 patients (1 female), respectively, both not correlated with GH treatment duration and increased levels of insulin-like growth factor 1. GH treatment with standard doses in GH-deficient patients can improve the facial profile in retrognathic patients and does not lead to facial disharmony although extremity growth, mainly involving the feet, can occur. Copyright 2009 S. Karger AG, Basel.
Anatomy of Sodium Hypochlorite Accidents Involving Facial Ecchymosis – A Review
Zhu, Wan-chun; Gyamfi, Jacqueline; Niu, Li-na; Schoeffel, G. John; Liu, Si-ying; Santarcangelo, Filippo; Khan, Sara; Tay, Kelvin C-Y.; Pashley, David H.; Tay, Franklin R.
2013-01-01
Objectives Root canal treatment forms an essential part of general dental practice. Sodium hypochlorite (NaOCl) is the most commonly used irrigant in endodontics due to its ability to dissolve organic soft tissues in the root canal system and its action as a potent antimicrobial agent. Although NaOCl accidents created by extrusion of the irrigant through root apices are relatively rare and are seldom life-threatening, they do create substantial morbidity when they occur. Methods To date, NaOCl accidents have only been published as isolated case reports. Although previous studies have attempted to summarise the symptoms involved in these case reports, there was no endeavor to analyse the distribution of soft tissue distribution in those reports. In this review, the anatomy of a classical NaOCl accident that involves facial swelling and ecchymosis is discussed. Results By summarising the facial manifestations presented in previous case reports, a novel hypothesis that involves intravenous infusion of extruded NaOCl into the facial vein via non-collapsible venous sinusoids within the cancellous bone is presented. Conclusions Understanding the mechanism involved in precipitating a classic NaOCl accident will enable the profession to make the best decision regarding the choice of irrigant delivery techniques in root canal débridement, and for manufacturers to design and improve their irrigation systems to achieve maximum safety and efficient cleanliness of the root canal system. PMID:23994710
Assessing Attentional Prioritization of Front-of-Pack Nutrition Labels using Change Detection
Becker, Mark W.; Sundar, Raghav Prashant; Bello, Nora; Alzahabi, Reem; Weatherspoon, Lorraine; Bix, Laura
2015-01-01
We used a change detection method to evaluate attentional prioritization of nutrition information that appears in the traditional “Nutrition Facts Panel” and in front-of-pack nutrition labels. Results provide compelling evidence that front-of-pack labels attract attention more readily than the Nutrition Facts Panel, even when participants are not specifically tasked with searching for nutrition information. Further, color-coding the relative nutritional value of key nutrients within the front-of-pack label resulted in increased attentional prioritization of nutrition information, but coding using facial icons did not significantly increase attention to the label. Finally, the general pattern of attentional prioritization across front-of-pack designs was consistent across a diverse sample of participants. Our results indicate that color-coded, front-of-pack nutrition labels increase attention to the nutrition information of packaged food, a finding that has implications for current policy discussions regarding labeling change. PMID:26851468
[Effect of extracts from Dendrobii officinalis flos on hyperthyroidism Yin deficiency mice].
Lei, Shan-shan; Lv, Gui-yuan; Jin, Ze-wu; Li, Bo; Yang, Zheng-biao; Chen, Su-hong
2015-05-01
Some unhealthy life habits, such as long-term smoking, heavy drinking, sexual overstrain and frequently staying up late, can induce the Yin deficiency symptoms of zygomatic red and dysphoria. Stems of Dendrobii officinalis flos (DOF) showed the efficacy of nourishing Yin. In this study, a hyperthyroidism Yin deficiency model was set up to study the Yin nourishing effect and action mechanism of DOF, in order to provide a pharmacological basis for developing DOF resources and decreasing resource waste. ICR mice were divided into five groups: the normal control group, the model control group, the positive control group and two DOF extract groups (6 and 4 g·kg(-1)). Except for the normal group, the other groups were administered thyroxine for 30 d to set up the hyperthyroidism Yin deficiency model; at the same time, they were administered the corresponding drugs for 30 d. After administration for 4 weeks, the signs (facial temperature, pain threshold, heart rate and autonomic activity) in mice were measured, and the facial and ear micro-circulation blood flow was detected by laser Doppler technology. After the last administration, all mice were fasted for 12 hours, blood was collected from the orbit, and serum was separated to detect AST, ALT, TG and TP with an automatic biochemistry analyzer and to test T3, T4 and TSH levels by ELISA. (1) Compared with the normal control group, the model control group showed significant increases in facial and ear micro-circulation blood flow, facial temperature and heart rate (P < 0.05, P < 0.01), serum AST and ALT (P < 0.01), T3 level (P < 0.05) and TSH level (P < 0.05), and notable decreases in pain threshold (P < 0.01) and TG level (P < 0.01). (2) Compared with the model control group, extracts from DOF (6 g·kg(-1)) could notably reduce facial and ear micro-circulation blood flow, facial temperature and heart rate (P < 0.05, P < 0.01) and AST (P < 0.05), and raise the pain threshold (P < 0.01) and TG (P < 0.01).
Extracts from DOF (4 g·kg(-1)) could remarkably reduce AST and ALT levels (P < 0.01, P < 0.05). Extracts from DOF (6 and 4 g·kg(-1)) could significantly reduce T3 and increase serum TSH levels (P < 0.05). DOF could improve the Yin deficiency symptoms of zygomatic red and dysphoria in mice, as well as the liver function injury caused by an overactive thyroid axis. Regarding its mechanism of action, DOF may exert its Yin nourishing and hepatoprotective effects by affecting thyroxine substance metabolism, improving micro-circulation and reducing heart rate.
Zeichner, Joshua A; Wong, Vicky; Linkner, Rita V; Haddican, Madelaine
2013-03-01
Combination therapy using medications with complementary mechanisms of action is the standard of care in treating acne. We report results of a clinical trial evaluating the use of a fixed-dose tretinoin 0.025%/clindamycin phosphate 1.2% (T/CP) gel in combination with a benzoyl peroxide 6% foaming cloth compared with T/CP alone for facial acne. At week 12, the combination therapy group showed a trend toward greater efficacy compared with T/CP alone. There was a high success rate observed in the study, which may be attributable to the large percentage of adult female acne patients enrolled. Cutaneous adverse events were not statistically different in using combination therapy compared with T/CP alone.
Potthoff, Denise; Seitz, Rüdiger J
2015-12-01
Humans typically make probabilistic inferences about another person's affective state based on her/his bodily movements such as emotional facial expressions, emblematic gestures and whole body movements. Furthermore, humans deduce tentative predictions about the other person's intentions. Thus, the first person perspective of a subject is supplemented by the second person perspective involving theory of mind and empathy. Neuroimaging investigations have shown that the medial and lateral frontal cortex are critical nodes in the circuits underlying theory of mind, empathy, as well as intention of action. It is suggested that personal perspective taking in social interactions is paradigmatic for the capability of humans to generate probabilistic accounts of the outside world that underlie a person's control of behaviour. Copyright © 2015 Elsevier Ltd. All rights reserved.
Adopting a code requiring radon-resistant new construction (RRNC) in Decatur, Alabama, took months of effort by four people. Their actions demonstrate the influence that passionate residents can have on reversing a city council’s direction.
Lyme disease and Bell's palsy: an epidemiological study of diagnosis and risk in England.
Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles
2017-05-01
Lyme disease is caused by a tick-borne spirochaete of the Borrelia species. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell's palsy. The aim was to produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell's palsy) in isolation, and as a combination. Patients' age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell's palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. © British Journal of General Practice 2017.
Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.
Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei
2016-04-01
The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.
Lahera, Guillermo; Ruiz, Alicia; Brañas, Antía; Vicens, María; Orozco, Arantxa
Previous studies have linked processing speed with social cognition and functioning of patients with schizophrenia. A discriminant analysis is needed to determine the different components of this neuropsychological construct. This paper analyzes the impact of processing speed, reaction time and sustained attention on social functioning. 98 outpatients aged 18 to 65 with a DSM-5 diagnosis of schizophrenia and at least 3 months of clinical stability were recruited. Sociodemographic and clinical data were collected, and the following variables were measured: processing speed (Trail Making Test [TMT], symbol coding [BACS], verbal fluency), simple and choice reaction time, sustained attention, recognition of facial emotions and global functioning. Processing speed (measured only through the BACS), sustained attention (CPT) and choice reaction time (but not simple) were associated with functioning. Recognition of facial emotions (FEIT) correlated significantly with scores on measures of processing speed (BACS, Animals, TMT), sustained attention (CPT) and reaction time. The linear regression model showed a significant relationship between functioning, emotion recognition (P=.015) and processing speed (P=.029). Deficits in processing speed and facial emotion recognition are associated with worse global functioning in patients with schizophrenia. Copyright © 2017 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.
How face blurring affects body language processing of static gestures in women and men.
Proverbio, A M; Ornaghi, L; Gabaro, V
2018-05-14
The role of facial coding in body language comprehension was investigated by ERP recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic), which could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men to congruent stimuli, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration did not affect accuracy in women, as reflected by omission percentages, nor did it reduce their cognitive potentials, suggesting a better comprehension of face-deprived pantomimes. N170 response (modulated by congruity and face presence) peaked later in men than in women. Late Positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information, or better at understanding body language when faces are not visible.
Lupis, Sarah B; Lerman, Michelle; Wolf, Jutta M
2014-11-01
While previous research has suggested that anger and fear responses to stress are linked to distinct sympathetic nervous system (SNS) stress responses, little is known about how these emotions predict hypothalamus-pituitary-adrenal (HPA) axis reactivity. Further, earlier research primarily relied on retrospective self-report of emotion. The current study aimed at addressing both issues in male and female individuals by assessing the role of anger and fear in predicting heart rate and cortisol stress responses using both self-report and facial coding analysis to assess emotion responses. We exposed 32 healthy students (18 female; 19.6±1.7 yr) to an acute psychosocial stress paradigm (TSST) and measured heart rate and salivary cortisol levels throughout the protocol. Anger and fear before and after stress exposure was assessed by self-report, and video recordings of the TSST were assessed by a certified facial coder to determine emotion expression (FACS). Self-reported emotions and emotion expressions did not correlate (all p>.23). Increases in self-reported fear predicted blunted cortisol responses in men (β=0.41, p=.04). Also for men, longer durations of anger expression predicted exaggerated cortisol responses (β=0.67 p=.004), and more anger incidences predicted exaggerated cortisol and heart rate responses (β=0.51, p=.033; β=0.46, p=.066, resp.). Anger and fear did not predict SNS or HPA activity for females (all p>.23). The current differential self-report and facial coding findings support the use of multiple modes of emotion assessment. Particularly, FACS but not self-report revealed a robust anger-stress association that could have important downstream health effects for men. For women, future research may clarify the role of other emotions, such as self-conscious expressions of shame, for physiological stress responses. 
A better understanding of the emotion-stress link may contribute to behavioral interventions targeting health-promoting ways of responding emotionally to stress. Copyright © 2014 Elsevier Ltd. All rights reserved.
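For readers unfamiliar with the statistics reported above, a standardized β in a simple regression (e.g., cortisol response on anger-expression duration) is just the slope after z-scoring both variables, which for a single predictor coincides with the Pearson correlation. A minimal sketch on simulated values (illustrative only, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-participant values: anger-expression duration (s)
# and cortisol response, loosely correlated by construction.
n = 14  # e.g., a male subsample of similar size
anger = rng.uniform(0, 30, n)
cortisol = 0.2 * anger + rng.normal(0, 2, n)

def standardized_beta(x, y):
    """Simple-regression standardized beta: the least-squares slope
    after z-scoring both variables."""
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(zx, zy, 1)[0])

beta = standardized_beta(anger, cortisol)
r = float(np.corrcoef(anger, cortisol)[0, 1])
print(abs(beta - r) < 1e-9)  # prints True: the two coincide for one predictor
```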
Decoding the neural mechanisms of human tool use
Gallivan, Jason P; McLean, D Adam; Valyear, Kenneth F; Culham, Jody C
2013-01-01
Sophisticated tool use is a defining characteristic of the primate species, but how is it supported by the brain, particularly the human brain? Here we show, using functional MRI and pattern classification methods, that tool use is subserved by multiple distributed action-centred neural representations that are both shared with and distinct from those of the hand. In areas of frontoparietal cortex we found a common representation for planned hand- and tool-related actions. In contrast, in parietal and occipitotemporal regions implicated in hand actions and body perception we found that coding remained selectively linked to upcoming actions of the hand, whereas in parietal and occipitotemporal regions implicated in tool-related processing the coding remained selectively linked to upcoming actions of the tool. The highly specialized and hierarchical nature of this coding suggests that hand- and tool-related actions are represented separately at earlier levels of sensorimotor processing before becoming integrated in frontoparietal cortex. DOI: http://dx.doi.org/10.7554/eLife.00425.001 PMID:23741616
Röder, Christian H; Mohr, Harald; Linden, David E J
2011-02-01
Faces are multidimensional stimuli that convey information for complex social and emotional functions. Separate neural systems have been implicated in the recognition of facial identity (mainly extrastriate visual cortex) and emotional expression (limbic areas and the superior temporal sulcus). Working-memory (WM) studies with faces have shown different but partly overlapping activation patterns in comparison to spatial WM in parietal and prefrontal areas. However, little is known about the neural representations of the different facial dimensions during WM. In the present study 22 subjects performed a face-identity or face-emotion WM task at different load levels during functional magnetic resonance imaging. We found a fronto-parietal-visual WM-network for both tasks during maintenance, including fusiform gyrus. Limbic areas in the amygdala and parahippocampal gyrus demonstrated a stronger activation for the identity than the emotion condition. One explanation for this finding is that the repetitive presentation of faces with different identities but the same emotional expression during the identity-task is responsible for the stronger increase in BOLD signal in the amygdala. These results raise the question of how different emotional expressions are coded in WM. Our findings suggest that emotional expressions are re-coded in an abstract representation that is supported at the neural level by the canonical fronto-parietal WM network. Copyright © 2010 Elsevier Ltd. All rights reserved.
Sliwa, Julia; Planté, Aurélie; Duhamel, Jean-René; Wirth, Sylvia
2016-03-01
Social interactions constitute, to a large extent, the prime material of episodic memories. We therefore asked how social signals are coded by neurons in the hippocampus. The human hippocampus is home to neurons representing familiar individuals in an abstract and invariant manner (Quian Quiroga et al. 2009). In contradistinction, activity of rat hippocampal cells is only weakly altered by the presence of other rats (von Heimendahl et al. 2012; Zynyuk et al. 2012). We probed the activity of monkey hippocampal neurons to faces and voices of familiar and unfamiliar individuals (monkeys and humans). Thirty-one percent of neurons recorded without prescreening responded to faces or to voices. Yet responses to faces were more informative about individuals than responses to voices, and neuronal responses to facial and vocal identities were not correlated, indicating that in our sample identity information was not conveyed in an invariant manner as in human neurons. Overall, responses displayed by monkey hippocampal neurons were similar to those of neurons recorded simultaneously in inferotemporal cortex, whose role in face perception is established. These results demonstrate that the monkey hippocampus, unlike the rat hippocampus, participates in the read-out of social information, but possibly lacks the explicit conceptual coding found in humans. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Appetitive Motivation and Negative Emotion Reactivity among Remitted Depressed Youth
Hankin, Benjamin L.; Wetter, Emily K.; Flory, Kate
2012-01-01
Depression has been characterized as involving altered appetitive motivation and emotional reactivity. Yet no study has examined objective indices of emotional reactivity when the appetitive/approach system is suppressed in response to failure to attain a self-relevant goal and desired reward. Three groups of youth (N = 98, ages 9–15; remitted depressed, n = 34; externalizing disordered without depression, n = 30; and healthy controls, n = 34) participated in a novel reward striving task designed to activate the appetitive/approach motivation system. Objective facial expressions of emotion were videotaped and coded throughout both failure (i.e., nonreward) and control (success and reward) conditions. Observational coding of facial expressions as well as youths’ subjective emotion reports showed that the remitted depressed youth specifically exhibited more negative emotional reactivity to failure in the reward striving task, but not the control condition. Neither externalizing disordered (i.e., ADHD, CD, and/or ODD) nor control youth displayed greater negative emotional reactivity in either the failure or control condition. Findings suggest that depression among youth is related to dysregulated appetitive motivation and associated negative emotional reactivity after failing to achieve an important, self-relevant goal and not attaining reward. These deficits in reward processing appear to be specific to depression as externalizing disordered youth did not display negative emotional reactivity to failure after their appetitive motivation system was activated. PMID:22901275
Harmer, Catherine J; Shelley, Nicholas C; Cowen, Philip J; Goodwin, Guy M
2004-07-01
Antidepressants that inhibit the reuptake of serotonin (SSRIs) or norepinephrine (SNRIs) are effective in the treatment of disorders such as depression and anxiety. Cognitive psychological theories emphasize the importance of correcting negative biases of information processing in the nonpharmacological treatment of these disorders, but it is not known whether antidepressant drugs can directly modulate the neural processing of affective information. The present study therefore assessed the actions of repeated antidepressant administration on perception and memory for positive and negative emotional information in healthy volunteers. Forty-two male and female volunteers were randomly assigned to 7 days of double-blind intervention with the SSRI citalopram (20 mg/day), the SNRI reboxetine (8 mg/day), or placebo. On the final day, facial expression recognition, emotion-potentiated startle response, and memory for affect-laden words were assessed. Questionnaires monitoring mood, hostility, and anxiety were given before and after treatment. In the facial expression recognition task, citalopram and reboxetine reduced the identification of the negative facial expressions of anger and fear. Citalopram also abolished the increased startle response found in the context of negative affective images. Both antidepressants increased the relative recall of positive (versus negative) emotional material. These changes in emotional processing occurred in the absence of significant differences in ratings of mood and anxiety. However, reboxetine decreased subjective ratings of hostility and elevated energy. Short-term administration of two different antidepressant types had similar effects on emotion-related tasks in healthy volunteers, reducing the processing of negative relative to positive emotional material. Such effects of antidepressants may ameliorate the negative biases in information processing that characterize mood and anxiety disorders. 
They also suggest a mechanism of action potentially compatible with cognitive theories of anxiety and depression.
Godlewska, B R; Browning, M; Norbury, R; Cowen, P J; Harmer, C J
2016-11-22
Antidepressant treatment reduces behavioural and neural markers of negative emotional bias early in treatment and has been proposed as a mechanism of antidepressant drug action. Here, we provide a critical test of this hypothesis by assessing whether neural markers of early emotional processing changes predict later clinical response in depression. Thirty-five unmedicated patients with major depression took the selective serotonin re-uptake inhibitor (SSRI), escitalopram (10 mg), over 6 weeks, and were classified as responders (22 patients) versus non-responders (13 patients), based on at least a 50% reduction in symptoms by the end of treatment. The neural response to fearful and happy emotional facial expressions was assessed before and after 7 days of treatment using functional magnetic resonance imaging. Changes in the neural response to these facial cues after 7 days of escitalopram were compared in patients as a function of later clinical response. A sample of healthy controls was also assessed. At baseline, depressed patients showed greater activation to fear versus happy faces than controls in the insula and dorsal anterior cingulate. Depressed patients who went on to respond to the SSRI had a greater reduction in neural activity to fearful versus happy facial expressions after just 7 days of escitalopram across a network of regions including the anterior cingulate, insula, amygdala and thalamus. Mediation analysis confirmed that the direct effect of neural change on symptom response was not mediated by initial changes in depressive symptoms. These results support the hypothesis that early changes in emotional processing with antidepressant treatment are the basis of later clinical improvement. As such, early correction of negative bias may be a key mechanism of antidepressant drug action and a potentially useful predictor of therapeutic response.
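The mediation logic invoked above can be illustrated with a generic product-of-coefficients sketch on simulated data (illustrative only, not the study's): the indirect effect of X on Y through a mediator M is the product of the X→M and M→Y paths, compared against the direct X→Y path estimated while controlling for M.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated variables, by analogy only: X = early neural change,
# M = initial symptom change (weak path by construction),
# Y = final clinical response (mostly driven directly by X).
n = 200
X = rng.normal(size=n)
M = 0.1 * X + rng.normal(size=n)
Y = 0.6 * X + 0.1 * M + rng.normal(size=n)

def ols(design, y):
    """Ordinary least squares coefficients via lstsq."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

ones = np.ones(n)
a = ols(np.column_stack([ones, X]), M)[1]               # path X -> M
c_prime, b = ols(np.column_stack([ones, X, M]), Y)[1:]  # direct X -> Y, and M -> Y
indirect = a * b                                        # mediated effect

print(abs(indirect) < abs(c_prime))  # prints True: direct effect dominates here
```

In the study's terms, a negligible indirect effect relative to the direct path is what licenses the conclusion that early symptom change did not mediate the neural-change/response association.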
Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.
Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro
2017-08-01
Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action; in other words, cognitive processing after observation is not related to the success of imitation. Twenty university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color-coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.
Seeing the mean: ensemble coding for sets of faces.
Haberman, Jason; Whitney, David
2009-06-01
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.
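The statistical intuition behind ensemble coding, that a precise mean can be extracted from imprecise individual representations, follows from averaging independent noise: the error of the mean estimate shrinks by roughly the square root of the set size. A toy simulation with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each face in a set has an "emotionality" value on an arbitrary morph
# axis (e.g., happy ... sad).  The observer encodes each face with
# independent internal noise, then averages to estimate the set mean.
n_trials, set_size, noise_sd = 5000, 16, 1.0
true_values = rng.uniform(-2, 2, size=(n_trials, set_size))
noisy_percepts = true_values + rng.normal(0, noise_sd, size=true_values.shape)

single_item_error = noisy_percepts[:, 0] - true_values[:, 0]
mean_estimate_error = noisy_percepts.mean(axis=1) - true_values.mean(axis=1)

print(round(float(single_item_error.std()), 2))    # ~ noise_sd
print(round(float(mean_estimate_error.std()), 2))  # ~ noise_sd / sqrt(set_size)
```

This only shows why averaging is powerful; the paper's modeling goes further, ruling out noisy single-item representation as the explanation for observers' precise mean judgments.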
Self-reported empathy and neural activity during action imitation and observation in schizophrenia
Horan, William P.; Iacoboni, Marco; Cross, Katy A.; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K.; Green, Michael F.
2014-01-01
Introduction Although social cognitive impairments are key determinants of functional outcome in schizophrenia their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. Methods 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Results Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Conclusions Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low-level social cues and higher-level, integrative social cognitive processes involved in self-perceived empathy. PMID:25009771
Self-reported empathy and neural activity during action imitation and observation in schizophrenia.
Horan, William P; Iacoboni, Marco; Cross, Katy A; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K; Green, Michael F
2014-01-01
Although social cognitive impairments are key determinants of functional outcome in schizophrenia their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. 23 schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest that patients show a disjunction between automatic neural responses to low level social cues and higher level, integrative social cognitive processes involved in self-perceived empathy.
Selective Transfer Machine for Personalized Facial Expression Analysis
Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.
2017-01-01
Automatic facial action unit (AU) and expression detection from videos is a long-standing problem. The problem is challenging in part because classifiers must generalize to previously unknown subjects who differ markedly in behavior and facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) from those on which the classifiers are trained. While some progress has been achieved through improvements in choices of features and classifiers, the challenge posed by individual differences among people remains. Person-specific classifiers would be a possible solution, but sufficient training data for them is typically unavailable. This paper addresses the problem of how to personalize a generic classifier without additional labels from the test subject. We propose a transductive learning method, which we refer to as the Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific mismatches. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. We compared STM to both generic classifiers and cross-domain learning methods on four benchmarks: CK+ [44], GEMEP-FERA [67], RU-FACS [4] and GFT [57]. STM outperformed generic classifiers in all. PMID:28113267
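The abstract's central mechanism, re-weighting training samples by their relevance to the unlabeled test subject before fitting a classifier, can be sketched in simplified form. The kernel-mean similarity weights and weighted logistic regression below are stand-ins for STM's joint SVM formulation; all data, names, and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "training subjects" whose feature vectors differ by a
# constant offset (a stand-in for individual differences in facial
# morphology), plus an unlabeled test subject resembling subject B.
def make_subject(offset, n=40):
    X0 = rng.normal(0.0, 0.3, (n, 2)) + offset               # class 0
    X1 = rng.normal(0.0, 0.3, (n, 2)) + offset + [2.0, 0.0]  # class 1
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

def relevance_weights(X_train, X_test, gamma=0.5):
    # Weight each training sample by its mean RBF-kernel similarity
    # to the test subject's (unlabeled) samples, normalized to mean 1.
    sims = np.array([np.exp(-gamma * ((x - X_test) ** 2).sum(axis=1)).mean()
                     for x in X_train])
    return sims / sims.sum() * len(sims)

def fit_weighted_logreg(X, y, w, lr=0.1, steps=2000):
    # Plain gradient descent on the sample-weighted logistic loss.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xb @ theta)))
        theta -= lr * (Xb.T @ (w * (p - y))) / len(y)
    return theta

def predict(theta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ theta > 0).astype(int)

X_A, y_A = make_subject(np.array([0.0, 0.0]))
X_B, y_B = make_subject(np.array([0.0, 3.0]))
X_train, y_train = np.vstack([X_A, X_B]), np.r_[y_A, y_B]
X_test, y_test = make_subject(np.array([0.0, 3.0]))   # resembles subject B

w = relevance_weights(X_train, X_test)        # up-weights subject B
theta = fit_weighted_logreg(X_train, y_train, w)
acc = (predict(theta, X_test) == y_test).mean()
```

The weights concentrate on the training subject whose distribution matches the test subject, so the fitted boundary is effectively personalized without any test-subject labels.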
Faces and Photography in 19th-Century Visual Science.
Wade, Nicholas J
2016-09-01
Reading faces for identity, character, and expression is as old as humanity, but representing these states is relatively recent. From the 16th century, physiognomists classified character in terms of facial form and represented the types graphically. Darwin distinguished between physiognomy (which concerned static features reflecting character) and expression (which was dynamic and reflected emotions). Artists represented personality, pleasure, and pain in their paintings and drawings, but the scientific study of faces was revolutionized by photography in the 19th century. Rather than relying on artistic abstractions of fleeting facial expressions, scientists photographed what the eye could not discriminate. Photography was applied first to stereoscopic portraiture (by Wheatstone), then to the study of facial expressions (by Duchenne) and to identity (by Galton and Bertillon). Photography opened new methods for investigating face perception, most markedly with Galton's composites derived from combining aligned photographs of many sitters. In the same decade (1870s), Kühne took the process of photography as a model for the chemical action of light in the retina. These developments and their developers are described and fixed in time, but the ideas they initiated have proved impossible to stop. © The Author(s) 2016.
Ferrari, Pier Francesco; Barbot, Anna; Bianchi, Bernardo; Ferri, Andrea; Garofalo, Gioacchino; Bruno, Nicola; Coudé, Gino; Bertolini, Chiara; Ardizzi, Martina; Nicolini, Ylenia; Belluardo, Mauro; Stefani, Elisa De
2017-05-01
Studies over the last twenty years on the motor and premotor cortices of primates have demonstrated that the motor system is involved not only in the control and initiation of movements but also in higher cognitive processes, such as action understanding, imitation, and empathy. Mirror neurons are only one example of this theoretical shift. Their properties demonstrate that motor and sensory processing are coupled in the brain. Such knowledge has also been central for designing new neurorehabilitative therapies for patients suffering from brain injuries and consequent motor deficits. Moebius Syndrome (MBS) patients, for example, are incapable of moving their facial muscles, which are fundamental for affective communication. These patients face an important challenge after having undergone corrective surgery: reanimating the transplanted muscles to achieve voluntary control of smiling. We propose two new complementary rehabilitative approaches for MBS patients based on observation/imitation therapy (Facial Imitation Therapy, FIT) and on hand-mouth motor synergies (Synergistic Activity Therapy, SAT). Preliminary results show that our intervention protocol is a promising approach for neurorehabilitation of patients with facial palsy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fairness modulates non-conscious facial mimicry in women.
Hofman, Dennis; Bos, Peter A; Schutter, Dennis J L G; van Honk, Jack
2012-09-07
In societies with high cooperation demands, implicit consensus on social norms enables successful human coexistence. Mimicking other people's actions and emotions has been proposed as a means to synchronize behaviour, thereby enhancing affiliation. Mimicry has long been thought to be reflexive, but it has recently been suggested that mimicry might also be motivationally driven. Here, we show during an economic bargaining game that automatic happy mimicry of those making unfair offers disappears. After the bargaining game, when the proposers have acquired either a fair or unfair reputation, we observe increased angry mimicry of proposers with an unfair reputation and decreased angry mimicry of fair proposers. These findings provide direct empirical evidence that non-conscious mimicry is modulated by fairness. We interpret the present results as reflecting that facial mimicry in women functions conditionally, dependent on situational demands.
Yamazaki, Yumiko; Yokochi, Hiroko; Tanaka, Michio; Okanoya, Kazuo; Iriki, Atsushi
2010-01-01
The anterior portion of the inferior parietal cortex possesses comprehensive representations of actions embedded in behavioural contexts. Mirror neurons, which respond to both self-executed and observed actions, exist in this brain region in addition to those originally found in the premotor cortex. We found that parietal mirror neurons responded differentially to identical actions embedded in different contexts. Another type of parietal mirror neuron represents an inverse and complementary property of responding equally to dissimilar actions made by itself and others for an identical purpose. Here, we propose a hypothesis that these sets of inferior parietal neurons constitute a neural basis for encoding the semantic equivalence of various actions across different agents and contexts. The neurons have mirror neuron properties, and they encoded generalization of agents, differentiation of outcomes, and categorization of actions that led to common functions. By integrating the activities of these mirror neurons with various codings, we further suggest that in the ancestral primates' brains, these various representations of meaningful action led to the gradual establishment of equivalence relations among the different types of actions, by sharing common action semantics. Such differential codings of the components of actions might represent precursors to the parts of protolanguage, such as gestural communication, which are shared among various members of a society. Finally, we suggest that the inferior parietal cortex serves as an interface between this action semantics system and other higher semantic systems, through common structures of action representation that mimic language syntax. PMID:20119879
43 CFR 11.64 - Injury determination phase-testing and sampling methods.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...
Lyme disease and Bell’s palsy: an epidemiological study of diagnosis and risk in England
Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles
2017-01-01
Background Lyme disease is caused by a tick-borne spirochaete of the genus Borrelia. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell’s palsy. Aim To produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Design and setting Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell’s palsy) in isolation, and as a combination. Method Patients’ age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Results Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell’s palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Conclusion Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. PMID:28396367
Orientation to Language Code and Actions in Group Work
ERIC Educational Resources Information Center
Aline, David; Hosoda, Yuri
2009-01-01
This conversation analytic study reveals how learners themselves, as speakers and listeners, demonstrate their own orientation to language code and actions on a moment-by-moment basis during collaborative tasks in English as a foreign language classrooms. The excerpts presented in this article were drawn from 23 hours of audio- and video-recorded…
76 FR 11340 - Potassium Hypochlorite; Exemption From the Requirement of a Tolerance
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... Does this action apply to me? You may be potentially affected by this action if you are a dairy cattle milk producer, including but not limited to: Dairy Cattle Milk Production (NAICS code 11212); Food manufacturing (NAICS code 311)... the docket, http://www.regulations.gov. The Agency conducted an in-depth review of the similarities...
Schonhardt-Bailey, Cheryl
2017-01-01
In parliamentary committee oversight hearings on fiscal policy, monetary policy, and financial stability, where verbal deliberation is the focus, nonverbal communication may be crucial in the acceptance or rejection of arguments proffered by policymakers. Systematic qualitative coding of these hearings in the 2010-15 U.K. Parliament finds the following: (1) facial expressions, particularly in the form of anger and contempt, are more prevalent in fiscal policy hearings, where backbench parliamentarians hold frontbench parliamentarians to account, than in monetary policy or financial stability hearings, where the witnesses being held to account are unelected policy experts; (2) comparing committees across chambers, hearings in the House of Lords committee yield more reassuring facial expressions relative to hearings in the House of Commons committee, suggesting a more relaxed and less adversarial context in the former; and (3) central bank witnesses appearing before both the Lords and Commons committees tend toward expressions of appeasement, suggesting a willingness to defer to Parliament.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-18
... Does this action apply to me? You may be potentially affected by this action if you are an agricultural producer, food manufacturer... Crop production (NAICS code 111); Animal production (NAICS code 112); Food manufacturing (NAICS code 311); Pesticide manufacturing (NAICS code 32532). This listing is not intended to be exhaustive, but rather...
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions, and it emphasizes frequent sources of FORTRAN problems that require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action on solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending the software documentation to explain the unusual technique.
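The kind of pattern extraction described, flagging small sections of "tricky" code in otherwise normal FORTRAN, can be sketched as a simple line scanner. The single rule below (flagging exact `.EQ.` comparisons against REAL literals, a classic FORTRAN pitfall that compilers accept silently) is purely illustrative and is not taken from the NASA system itself.

```python
import re

# Illustrative rule: exact equality tests on REAL literals often work
# "by accident" until a platform or compiler change, making them a
# frequent source of malfunctions that need manual effort to find.
REAL_EQ = re.compile(r"\.EQ\.\s*\d*\.\d+", re.IGNORECASE)

def scan(source):
    """Return (line number, message) for each suspicious source line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if REAL_EQ.search(line):
            findings.append((lineno, "floating-point .EQ. comparison"))
    return findings

sample = """      X = A / B
      IF (X .EQ. 0.1) GO TO 10
10    CONTINUE
"""
issues = scan(sample)
```

A real analyzer would parse the source properly; the point here is only the workflow of surfacing a small unusual section from the bulk of normal code.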
Code of Federal Regulations, 2010 CFR
2010-07-01
... subject to disciplinary actions under the Uniform Code of Military Justice, judicial action as authorized by state or federal law, or administrative action as provided by controlling regulation. ...
Troisi, Alfonso; Pompili, Enrico; Binello, Luigi; Sterpone, Alessandro
2007-03-30
Despite the central role of nonverbal behavior in regulating social interactions, its relationship to functional disability in schizophrenia has received little empirical attention. This study aimed at assessing the relationship of patients' spontaneous facial expressivity during the clinical interview to clinician-rated and self-reported measures of functional disability. The nonverbal behavior of 28 stabilized patients with schizophrenia was analyzed by using the Ethological Coding System for Interviews (ECSI). Functional disability was assessed using the Global Assessment of Functioning (GAF) scale and the Sheehan Disability Scale (DISS). Partial correlation analysis controlling for the confounding effects of neuroleptic treatment showed that facial expressivity was correlated with the GAF score (r=0.42, P=0.03) and the scores on the subscales of the DISS measuring work (r=-0.52, P=0.005) and social (r=-0.50, P=0.007) disability. In a multiple regression model, nonverbal behavior explained variation in patients' work and social disability better than negative symptoms. The results of this pilot study suggest that deficits in encoding affiliative signals may play a role in determining or aggravating functional disability in schizophrenia. One clinical implication of this finding is that remediation training programs designed to improve nonverbal communication could also serve as a useful adjunct for improving work and social functioning in patients with schizophrenia.
I Think We're Alone Now: Solitary Social Behaviors in Adolescents with Autism Spectrum Disorder.
Zane, Emily; Neumeyer, Kayla; Mertens, Julia; Chugg, Amanda; Grossman, Ruth B
2017-10-10
Research into emotional responsiveness in Autism Spectrum Disorder (ASD) has yielded mixed findings. Some studies report uniform, flat and emotionless expressions in ASD; others describe highly variable expressions that are as or even more intense than those of typically developing (TD) individuals. Variability in findings is likely due to differences in study design: some studies have examined posed (i.e., not spontaneous expressions) and others have examined spontaneous expressions in social contexts, during which individuals with ASD-by nature of the disorder-are likely to behave differently than their TD peers. To determine whether (and how) spontaneous facial expressions and other emotional responses are different from TD individuals, we video-recorded the spontaneous responses of children and adolescents with and without ASD (between the ages of 10 and 17 years) as they watched emotionally evocative videos in a non-social context. Researchers coded facial expressions for intensity, and noted the presence of laughter and other responsive vocalizations. Adolescents with ASD displayed more intense, frequent and varied spontaneous facial expressions than their TD peers. They also produced significantly more emotional vocalizations, including laughter. Individuals with ASD may display their emotions more frequently and more intensely than TD individuals when they are unencumbered by social pressure. Differences in the interpretation of the social setting and/or understanding of emotional display rules may also contribute to differences in emotional behaviors between groups.
Laser-assisted hair removal for facial hirsutism in women: A review of evidence.
Lee, Chun-Man
2018-06-01
Polycystic ovary syndrome (PCOS) has been described as the most common diagnosis underlying hirsutism in women. Facial hirsutism is by far the most distressing symptom of hyperandrogenism in women with PCOS. A statistically significant improvement in psychological well-being has been reported in patients with PCOS allocated to laser-assisted hair removal. The theory of selective photothermolysis has revolutionized laser hair removal, which is effective and safe when operated by sufficiently trained and experienced professionals. Long-pulsed ruby (694 nm), long-pulsed alexandrite (755 nm), diode (800-980 nm), and long-pulsed Nd:YAG (1064 nm) are the most widely studied commercially available laser devices for hair removal. This article introduces the fundamentals and mechanism of action of lasers in hair removal through a contemporary literature review of the medium- to long-term efficacy and safety profiles of the laser hair removal modalities most widely available to date.
Treatment of hemifacial spasm with botulinum A toxin. Results and rationale.
Gonnering, R S
1986-01-01
Hemifacial spasm is characterized by unilateral, periodic, tonic contractions of facial muscles, thought to be caused by mechanical compression at the root-exit zone of the facial nerve. Electrophysiologic abnormalities such as ectopic excitation and synkinesis are typical. Although posterior fossa microsurgical nerve decompression is successful in bringing about relief of the spasm in most cases, it carries a risk to hearing. As an alternative treatment, 15 patients with hemifacial spasm were given a total of 41 sets of injections with botulinum A toxin, with a mean follow-up of 14.3 +/- 1.1 months. Relief of symptoms lasted a mean of 108.3 +/- 4.2 days. Mild transient lagophthalmos and ptosis were the only complications. Although the exact mechanism of its action and beneficial effect is speculative at this time, botulinum A toxin appears to offer an effective, safe alternative to more radical intracranial surgery for patients with hemifacial spasm.
Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha
2015-09-01
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is because supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as users stay neutral for the majority of the time in typical applications such as video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on the respective KE point. The proposed method thus improves emotion recognition (ER) accuracy while reducing the computational complexity of the ER system, as validated on multiple databases.
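The gating idea, learning a per-user neutral appearance at key points and skipping emotion classification when the current frame matches it, can be sketched as follows. The patch size, threshold, and key-point coordinates are illustrative placeholders, not the paper's statistical texture model.

```python
import numpy as np

rng = np.random.default_rng(1)

def patch(frame, cx, cy, r=2):
    # (2r+1) x (2r+1) intensity patch centred on a key point.
    return frame[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)

def build_neutral_template(neutral_frames, keypoints, r=2):
    # Average patch per key point over the user's reference neutral frames.
    return {kp: np.mean([patch(f, *kp, r) for f in neutral_frames], axis=0)
            for kp in keypoints}

def is_neutral(frame, template, r=2, tol=15.0):
    # Neutral if every key-point patch stays close to its template
    # (mean absolute intensity difference below a tolerance).
    errs = [np.abs(patch(frame, *kp, r) - t).mean()
            for kp, t in template.items()]
    return bool(max(errs) < tol)

# Synthetic 48x48 "frames": flat grey faces with sensor noise.
keypoints = [(12, 30), (36, 30), (24, 40)]   # placeholder eye/mouth points
neutral_frames = [np.full((48, 48), 100.0) + rng.normal(0, 2, (48, 48))
                  for _ in range(5)]
template = build_neutral_template(neutral_frames, keypoints)

calm = np.full((48, 48), 100.0) + rng.normal(0, 2, (48, 48))
smiling = calm.copy()
smiling[38:43, 22:27] += 60.0   # brighten the "mouth" key-point patch
```

In a pipeline, frames for which `is_neutral` returns `True` would bypass the expensive supervised emotion classifier entirely.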
Spontaneous action potentials and neural coding in unmyelinated axons.
O'Donnell, Cian; van Rossum, Mark C W
2015-04-01
The voltage-gated Na and K channels in neurons are responsible for action potential generation. Because ion channels open and close stochastically, spontaneous (ectopic) action potentials can occur even in the absence of stimulation. While spontaneous action potentials have been studied in detail in single-compartment models, studies of spatially extended processes have been limited. The simulations and analysis presented here show that the spontaneous rate in unmyelinated axons depends nonmonotonically on axon length, that the spontaneous activity has sub-Poisson statistics, and that spontaneous spikes can hamper neural coding by reducing the probability of transmitting the first spike in a train.
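The sub-Poisson claim can be illustrated with a toy renewal process rather than the paper's stochastic channel model: adding a refractory dead time to an otherwise Poisson spike generator pushes the Fano factor (spike-count variance over mean) below 1. All rates and durations here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_counts(rate, refractory, window, n_trials):
    # Renewal process: exponential waiting time plus a fixed dead time.
    counts = []
    for _ in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate) + refractory
            if t > window:
                break
            n += 1
        counts.append(n)
    return np.array(counts)

fano = lambda c: c.var() / c.mean()   # Fano factor of the spike counts

# With no dead time the process is Poisson (Fano ~ 1); a 40 ms dead
# time regularizes the train, giving sub-Poisson counts (Fano < 1).
poisson_like = spike_counts(rate=20.0, refractory=0.0, window=10.0, n_trials=2000)
regularized = spike_counts(rate=20.0, refractory=0.04, window=10.0, n_trials=2000)
```

The same qualitative signature (count variance below the Poisson prediction) is what the abstract reports for channel-noise-driven spontaneous spiking.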
Clinical practice guideline: Bell's palsy.
Baugh, Reginald F; Basura, Gregory J; Ishii, Lisa E; Schwartz, Seth R; Drumheller, Caitlin Murray; Burkholder, Rebecca; Deckard, Nathan A; Dawson, Cindy; Driscoll, Colin; Gillespie, M Boyd; Gurgel, Richard K; Halperin, John; Khalid, Ayesha N; Kumar, Kaparaboyna Ashok; Micco, Alan; Munsell, Debra; Rosenbaum, Steven; Vaughan, William
2013-11-01
Bell's palsy, named after the Scottish anatomist, Sir Charles Bell, is the most common acute mono-neuropathy, or disorder affecting a single nerve, and is the most common diagnosis associated with facial nerve weakness/paralysis. Bell's palsy is a rapid unilateral facial nerve paresis (weakness) or paralysis (complete loss of movement) of unknown cause. The condition leads to the partial or complete inability to voluntarily move facial muscles on the affected side of the face. Although typically self-limited, the facial paresis/paralysis that occurs in Bell's palsy may cause significant temporary oral incompetence and an inability to close the eyelid, leading to potential eye injury. Additional long-term poor outcomes do occur and can be devastating to the patient. Treatments are generally designed to improve facial function and facilitate recovery. There are myriad treatment options for Bell's palsy, and some controversy exists regarding the effectiveness of several of these options, and there are consequent variations in care. In addition, numerous diagnostic tests available are used in the evaluation of patients with Bell's palsy. Many of these tests are of questionable benefit in Bell's palsy. Furthermore, while patients with Bell's palsy enter the health care system with facial paresis/paralysis as a primary complaint, not all patients with facial paresis/paralysis have Bell's palsy. It is a concern that patients with alternative underlying etiologies may be misdiagnosed or have unnecessary delay in diagnosis. All of these quality concerns provide an important opportunity for improvement in the diagnosis and management of patients with Bell's palsy. The primary purpose of this guideline is to improve the accuracy of diagnosis for Bell's palsy, to improve the quality of care and outcomes for patients with Bell's palsy, and to decrease harmful variations in the evaluation and management of Bell's palsy. 
This guideline addresses these needs by encouraging accurate and efficient diagnosis and treatment and, when applicable, facilitating patient follow-up to address the management of long-term sequelae or evaluation of new or worsening symptoms not indicative of Bell's palsy. The guideline is intended for all clinicians in any setting who are likely to diagnose and manage patients with Bell's palsy. The target population is inclusive of both adults and children presenting with Bell's palsy. ACTION STATEMENTS: The development group made a strong recommendation that (a) clinicians should assess the patient using history and physical examination to exclude identifiable causes of facial paresis or paralysis in patients presenting with acute-onset unilateral facial paresis or paralysis, (b) clinicians should prescribe oral steroids within 72 hours of symptom onset for Bell's palsy patients 16 years and older, (c) clinicians should not prescribe oral antiviral therapy alone for patients with new-onset Bell's palsy, and (d) clinicians should implement eye protection for Bell's palsy patients with impaired eye closure. The panel made recommendations that (a) clinicians should not obtain routine laboratory testing in patients with new-onset Bell's palsy, (b) clinicians should not routinely perform diagnostic imaging for patients with new-onset Bell's palsy, (c) clinicians should not perform electrodiagnostic testing in Bell's palsy patients with incomplete facial paralysis, and (d) clinicians should reassess or refer to a facial nerve specialist those Bell's palsy patients with (1) new or worsening neurologic findings at any point, (2) ocular symptoms developing at any point, or (3) incomplete facial recovery 3 months after initial symptom onset. 
The development group provided the following options: (a) clinicians may offer oral antiviral therapy in addition to oral steroids within 72 hours of symptom onset for patients with Bell's palsy, and (b) clinicians may offer electrodiagnostic testing to Bell's palsy patients with complete facial paralysis. The development group made no recommendation regarding (a) surgical decompression for patients with Bell's palsy, (b) the effect of acupuncture in patients with Bell's palsy, or (c) the effect of physical therapy in patients with Bell's palsy.
Effects of the potential lithium-mimetic, ebselen, on impulsivity and emotional processing.
Masaki, Charles; Sharpley, Ann L; Cooper, Charlotte M; Godlewska, Beata R; Singh, Nisha; Vasudevan, Sridhar R; Harmer, Catherine J; Churchill, Grant C; Sharp, Trevor; Rogers, Robert D; Cowen, Philip J
2016-07-01
Lithium remains the most effective treatment for bipolar disorder and also lowers suicidal behaviour, a property that may be linked to its ability to diminish impulsive, aggressive behaviour. The antioxidant drug ebselen has been proposed as a possible lithium-mimetic based on its ability in animals to inhibit inositol monophosphatase (IMPase), an action it shares with lithium. The aim of the study was to determine whether treatment with ebselen altered emotional processing and diminished measures of risk-taking behaviour. We studied 20 healthy participants who were tested on two occasions, receiving either ebselen (3600 mg over 24 h) or identical placebo in a double-blind, randomized, cross-over design. Three hours after the final dose of ebselen/placebo, participants completed the Cambridge Gambling Task (CGT) and a facial emotion recognition task (FERT) requiring the detection of emotional facial expressions. On the CGT, relative to placebo, ebselen reduced delay aversion, while on the FERT it increased the recognition of positive versus negative facial expressions. The study suggests that, at the dosage used, ebselen can decrease impulsivity and produce a positive bias in emotional processing. These findings have implications for the possible use of ebselen in disorders characterized by impulsive behaviour and dysphoric mood.
The Noh mask effect: vertical viewpoint dependence of facial expression perception.
Lyons, M J; Campbell, R; Plante, A; Coleman, M; Kamachi, M; Akamatsu, S
2000-01-01
Full-face masks, worn by skilled actors in the Noh tradition, can induce a variety of perceived expressions with changes in head orientation. Out-of-plane rotation of the head changes the two-dimensional image characteristics of the face which viewers may misinterpret as non-rigid changes due to muscle action. Three experiments with Japanese and British viewers explored this effect. Experiment 1 confirmed a systematic relationship between vertical angle of view of a Noh mask and judged affect. A forward tilted mask was more often judged happy, and one backward tilted more often judged sad. This effect was moderated by culture. Japanese viewers ascribed happiness to the mask at greater degrees of backward tilt with a reversal towards sadness at extreme forward angles. Cropping the facial image of chin and upper head contour reduced the forward-tilt reversal. Finally, the relationship between head tilt and affect was replicated with a laser-scanned human face image, but with no cultural effect. Vertical orientation of the head changes the apparent disposition of facial features and viewers respond systematically to these changes. Culture moderates this effect, and we discuss how perceptual strategies for ascribing expression to familiar and unfamiliar images may account for the differences. PMID:11413638
Rhinoplasty and the aesthetic of the smile.
de Benito, J; Fernandez Sanza, I
1995-01-01
The resection of the columella and nasal depressor muscles is a simple operation to perform and one which allows an improvement in the facial physiognomy of many patients. This operation can be done alone or in conjunction with classic rhinoplasty, thus achieving an improvement in the aesthetics of the smile. It has also been demonstrated, contrary to common belief, that the action of these muscles has no connection with physiological breathing mechanisms.
2006-12-01
[Garbled report documentation page; recoverable details: author COL Timothy A. Mitchener, DC USA; injury cause coding follows the NATO Standardization Agreement (STANAG), 5th edition, coding scheme (see P.J. Amoroso, G.S. Smith, and N.S. Bell: Qualitative assessment of cause...).]
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). If you have any questions regarding the applicability of this action to a... commodities, Feed additives, Food additives, Pesticides and pests, Reporting and recordkeeping requirements...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). If you have any questions regarding the applicability of this action to a... Subjects Environmental protection, Agricultural commodities, Feed additives, Food additives, Pesticides and...
76 FR 77549 - Colorado River Indian Tribes-Amendment to Health & Safety Code, Article 2. Liquor
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... Health & Safety Code, Article 2. Liquor AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice. SUMMARY: This notice publishes the amendment to the Colorado River Tribal Health and Safety Code, Article... Code, Article 2, Liquor by Ordinance No. 10-03 on December 13, 2010. This notice is published in...
Sokol, Ellen; Clark, David; Aguayo, Victor M
2008-09-01
In 1981 the World Health Assembly (WHA) adopted the International Code of Marketing of Breastmilk Substitutes out of concern that inappropriate marketing of breastmilk substitutes was contributing to the alarming decline in breastfeeding worldwide and the increase in child malnutrition and mortality, particularly in developing countries. To document progress, challenges, and lessons learned in the implementation of the International Code in West and Central Africa. Data were obtained by literature review and interviews with key informants. Twelve of the 24 countries have laws, decrees, or regulations that implement all or most of the provisions of the Code, 6 countries have a draft law or decree that is awaiting government approval or have a government committee that is studying how best to implement the Code, 3 countries have a legal instrument that enacts a few provisions of the Code, and 3 countries have not taken any action to implement the Code. International declarations and initiatives for child nutrition and survival have provided impetus for national implementation of the Code. National action to regulate the marketing of breastmilk substitutes needs to be linked to national priorities for nutrition and child survival. A clearly defined scope is essential for effective implementation of national legislation. Leadership and support by health professionals is essential to endorse and enforce national legislation. Training on Code implementation is instrumental for national action; national implementation of the Code requires provisions and capacity to monitor and enforce the legislative framework and needs to be part of a multipronged strategy to advance national child nutrition and survival goals. Nations in West and Central Africa have made important progress in implementing the International Code. More than 25 years after its adoption by the WHA, the Code remains as important as ever for child survival and development in West and Central Africa.
The attraction of emotions: Irrelevant emotional information modulates motor actions.
Ambron, Elisabetta; Foroni, Francesco
2015-08-01
Emotional expressions are important cues that capture our attention automatically. Although a wide range of work has explored the role and influence of emotions on cognition and behavior, little is known about the way that emotions influence motor actions. Moreover, considering how critical detecting emotional facial expressions in the environment can be, it is important to understand their impact even when they are not directly relevant to the task being performed. Our novel approach was to explore this issue from the attention-and-action perspective, using a task-irrelevant distractor paradigm in which participants are asked to reach for a target while a nontarget stimulus is also presented. We tested whether the movement trajectory would be influenced by irrelevant stimuli-faces with or without emotional expressions. The results showed that reaching paths veered toward faces with emotional expressions, in particular happiness, but not toward neutral expressions. This reinforces the view of emotions as attention-capturing stimuli that are, however, also potential sources of distraction for motor actions.
Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L.; Oller, D. Kimbrough
2017-01-01
Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories—a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed “protophones” (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories. Indeed, coders in the video-only (VID) condition could scarcely distinguish silence from voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language. PMID:29423398
Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L; Oller, D Kimbrough
2017-01-01
Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories-a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed "protophones" (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories. Indeed, coders in the video-only (VID) condition could scarcely distinguish silence from voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language.
Empathy, Challenge, and Psychophysiological Activation in Therapist–Client Interaction
Voutilainen, Liisa; Henttonen, Pentti; Kahri, Mikko; Ravaja, Niklas; Sams, Mikko; Peräkylä, Anssi
2018-01-01
Two central dimensions in psychotherapeutic work are a therapist’s empathy with clients and challenging their judgments. We investigated how they influence psychophysiological responses in the participants. Data were from psychodynamic therapy sessions, 24 sessions from 5 dyads, from which 694 therapist’s interventions were coded. Heart rate and electrodermal activity (EDA) of the participants were used to index emotional arousal. Facial muscle activity (electromyography) was used to index positive and negative emotional facial expressions. Electrophysiological data were analyzed in two time frames: (a) during the therapists’ interventions and (b) across the whole psychotherapy session. Both empathy and challenge had an effect on psychophysiological responses in the participants. Therapists’ empathy decreased clients’ and increased their own EDA across the session. Therapists’ challenge increased their own EDA in response to the interventions, but not across the sessions. Clients, on the other hand, did not respond to challenges during interventions, but challenges tended to increase EDA across a session. Furthermore, there was an interaction effect between empathy and challenge. Heart rate decreased and positive facial expressions increased in sessions where empathy and challenge were coupled, i.e., the amount of both empathy and challenge was either high or low. This suggests that these two variables work together. The results highlight the therapeutic functions and interrelation of empathy and challenge, and in line with the dyadic system theory by Beebe and Lachmann (2002), the systemic linkage between interactional expression and individual regulation of emotion. PMID:29695992
Matsumiya, Lynn C; Sorge, Robert E; Sotocinal, Susana G; Tabaka, John M; Wieskopf, Jeffrey S; Zaloum, Austin; King, Oliver D; Mogil, Jeffrey S
2012-01-01
Postoperative pain management in animals is complicated greatly by the inability to recognize pain. As a result, the choice of analgesics and their doses has been based on extrapolation from greatly differing pain models or the use of measures with unclear relevance to pain. We recently developed the Mouse Grimace Scale (MGS), a facial-expression–based pain coding system adapted directly from scales used in nonverbal human populations. The MGS has been shown to be a reliable, highly accurate measure of spontaneous pain of moderate duration, and therefore is particularly useful in the quantification of postoperative pain. In the present study, we quantified the relative intensity and duration of postoperative pain after a sham ventral ovariectomy (laparotomy) in outbred mice. In addition, we compiled dose–response data for 4 commonly used analgesics: buprenorphine, carprofen, ketoprofen, and acetaminophen. We found that postoperative pain in mice, as defined by facial grimacing, lasts for 36 to 48 h, and appears to show relative exacerbation during the early dark (active) photophase. We find that buprenorphine was highly effective in inhibiting postoperative pain-induced facial grimacing in mice at doses equal to or lower than current recommendations, that carprofen and ketoprofen are effective only at doses markedly higher than those currently recommended, and that acetaminophen was ineffective at any dose used. We suggest the revision of practices for postoperative pain management in mice in light of these findings. PMID:22330867
Zhang, Lei; Shen, Shunyao; Yu, Hongbo; Shen, Steve Guofang; Wang, Xudong
2015-07-01
The aim of this study was to investigate the use of computer-aided design and computer-aided manufacturing hydroxyapatite (HA)/epoxide acrylate maleic (EAM) compound construction artificial implants for craniomaxillofacial bone defects. Computed tomography, computer-aided design/computer-aided manufacturing and three-dimensional reconstruction, as well as rapid prototyping were performed in 12 patients between 2008 and 2013. The customized HA/EAM compound artificial implants were manufactured through selective laser sintering using a rapid prototyping machine into the exact geometric shapes of the defect. The HA/EAM compound artificial implants were then implanted during surgical reconstruction. Color-coded superimpositions demonstrated the discrepancy between the virtual plan and achieved results using Geomagic Studio. As a result, the HA/EAM compound artificial bone implants were perfectly matched with the facial areas that needed reconstruction. The postoperative aesthetic and functional results were satisfactory. The color-coded superimpositions demonstrated good consistency between the virtual plan and achieved results. The three-dimensional maximum deviation is 2.12 ± 0.65 mm and the three-dimensional mean deviation is 0.27 ± 0.07 mm. No facial nerve weakness or pain was observed at the follow-up examinations. Only 1 implant had to be removed 2 months after the surgery owing to severe local infection. No other complication was noted during the follow-up period. In conclusion, computer-aided, individually fabricated HA/EAM compound construction artificial implant was a good craniomaxillofacial surgical technique that yielded improved aesthetic results and functional recovery after reconstruction.
ERIC Educational Resources Information Center
Wall, Candace A.; Rafferty, Lisa A.; Camizzi, Mariya A.; Max, Caroline A.; Van Blargan, David M.
2016-01-01
Many students who struggle to obtain the alphabetic principle are at risk for being identified as having a reading disability and would benefit from additional explicit phonics instruction as a remedial measure. In this action research case study, the research team conducted two experiments to investigate the effects of a color-coded, onset-rime,…
2011-01-01
We recently demonstrated the utility of quantifying spontaneous pain in mice via the blinded coding of facial expressions. As the majority of preclinical pain research is in fact performed in the laboratory rat, we attempted to modify the scale for use in this species. We present herein the Rat Grimace Scale, and show its reliability, accuracy, and ability to quantify the time course of spontaneous pain in the intraplantar complete Freund's adjuvant, intraarticular kaolin-carrageenan, and laparotomy (post-operative pain) assays. The scale's ability to demonstrate the dose-dependent analgesic efficacy of morphine is also shown. In addition, we have developed software, Rodent Face Finder®, which successfully automates the most labor-intensive step in the process. Given the known mechanistic dissociations between spontaneous and evoked pain, and the primacy of the former as a clinical problem, we believe that widespread adoption of spontaneous pain measures such as the Rat Grimace Scale might lead to more successful translation of basic science findings into clinical application. PMID:21801409
A Case of Brown-Vialetto-Van Laere Syndrome Due To a Novel Mutation in SLC52A3 Gene
Thulasi, Venkatraman; Veerapandiyan, Aravindhan; Pletcher, Beth A.; Tong, Chun M.
2017-01-01
Brown-Vialetto-Van Laere syndrome is a rare disorder characterized by motor, sensory, and cranial neuronopathies, associated with mutations in SLC52A2 and SLC52A3 genes that code for human riboflavin transporters RFVT2 and RFVT3, respectively. The authors describe the clinical course of a 6-year-old girl with Brown-Vialetto-Van Laere syndrome and a novel homozygous mutation c.1156T>C in the SLC52A3 gene, who presented at the age of 2.5 years with progressive brain stem dysfunction including ptosis, facial weakness, hearing loss, dysphagia, anarthria with bilateral vocal cord paralysis, and ataxic gait. She subsequently developed respiratory failure requiring tracheostomy and worsening dysphagia necessitating a gastrostomy. Following riboflavin supplementation, resolution of facial diplegia and ataxia, improvements in ptosis, and bulbar function including vocalization and respiration were noted. However, her sensorineural hearing loss remained unchanged. Similar to other cases of Brown-Vialetto-Van Laere syndrome, our patient responded favorably to early riboflavin supplementation with significant but not complete neurologic recovery. PMID:28856173
Thulasi, Venkatraman; Veerapandiyan, Aravindhan; Pletcher, Beth A; Tong, Chun M; Ming, Xue
2017-01-01
Brown-Vialetto-Van Laere syndrome is a rare disorder characterized by motor, sensory, and cranial neuronopathies, associated with mutations in SLC52A2 and SLC52A3 genes that code for human riboflavin transporters RFVT2 and RFVT3, respectively. The authors describe the clinical course of a 6-year-old girl with Brown-Vialetto-Van Laere syndrome and a novel homozygous mutation c.1156T>C in the SLC52A3 gene, who presented at the age of 2.5 years with progressive brain stem dysfunction including ptosis, facial weakness, hearing loss, dysphagia, anarthria with bilateral vocal cord paralysis, and ataxic gait. She subsequently developed respiratory failure requiring tracheostomy and worsening dysphagia necessitating a gastrostomy. Following riboflavin supplementation, resolution of facial diplegia and ataxia, improvements in ptosis, and bulbar function including vocalization and respiration were noted. However, her sensorineural hearing loss remained unchanged. Similar to other cases of Brown-Vialetto-Van Laere syndrome, our patient responded favorably to early riboflavin supplementation with significant but not complete neurologic recovery.
Different coding strategies for the perception of stable and changeable facial attributes.
Taubert, Jessica; Alais, David; Burr, David
2016-09-01
Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.
The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2017-01-01
Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities will impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels will modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted compared to the restricted condition, and compared to the low empathy participants. Remarkably however, the latter showed the exactly opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.
Bringing Action Reflection Learning into Action Learning
ERIC Educational Resources Information Center
Rimanoczy, Isabel; Brown, Carole
2008-01-01
This paper introduces Action Reflection Learning (ARL) as a learning methodology that can contribute to, and enrich, the practice of action learning programs. It describes the Swedish constructivist origins of the model, its evolution and the coded responses that resulted from researching the practice. The paper presents the resulting sixteen ARL…
Magai, C; Cohen, C I; Culver, C; Gomberg, D; Malatesta, C
1997-11-01
Twenty-seven nursing home patients with mid- to late-stage dementia participated in a study of the relation between preillness personality, as indexed by attachment and emotion regulation style, and current emotional behavior. Preillness measures were completed by family members and current assessments of emotion were supplied by nursing home aides and family members; in addition, emotion was coded during a family visit using an objective coding system for facial emotion expressions. Attachment style was found to be related to the expression of positive affect, with securely attached individuals displaying more positive affect than avoidantly attached individuals. In addition, high ratings on premorbid hostility were associated with higher rates of negative affect and lower rates of positive affect. These findings indicate that premorbid aspects of personality show continuity over time, even in mid- to late-stage dementia.
The role of great auricular-facial nerve neurorrhaphy in facial nerve damage
Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo
2015-01-01
Background: The facial nerve is easily damaged, and many methods exist for its reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, great auricular-facial nerve neurorrhaphy has received little study. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. Results: Apex nasi amesiality observation found that apex nasi amesiality in the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut and worse than facial nerve end-to-end anastomosis. Conclusions: The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh. PMID:26550216
The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.
Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo
2015-01-01
The facial nerve is easily damaged, and many methods exist for its reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, great auricular-facial nerve neurorrhaphy has received little study. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. Apex nasi amesiality observation found that apex nasi amesiality in the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut and worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allevato, Adam
2016-07-21
ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files which define actions and conditions. A Python file parses the code and runs actions sequentially using the sys and subprocess Python modules. Between actions, it uses various ROS-based code to check the conditions required to proceed, and only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a Qt application designed to create the YAML files required for ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
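The condition-gated, sequential execution loop described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not ROSSTEP's actual code: the list of action dicts stands in for the parsed YAML files, the `conditions` callables stand in for the ROS-based condition checks, and all names (`run_sequence`, `cmd`, `conditions`) are hypothetical.

```python
import subprocess
import time

def run_sequence(actions, poll_interval=0.05, timeout=5.0):
    """Run each action's command in order, blocking between actions
    until all of that action's conditions evaluate to True.

    Each action is a dict {"cmd": [...], "conditions": [callable, ...]},
    standing in for one entry of a parsed ROSSTEP-style YAML file.
    """
    outputs = []
    for action in actions:
        deadline = time.monotonic() + timeout
        # Poll until every precondition holds (ROSSTEP would query ROS here).
        while not all(check() for check in action.get("conditions", [])):
            if time.monotonic() > deadline:
                raise TimeoutError("conditions never satisfied for %r" % action["cmd"])
            time.sleep(poll_interval)
        # Launch this step's script/command and capture its output.
        result = subprocess.run(action["cmd"], capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs
```

For example, `run_sequence([{"cmd": ["echo", "step-one"], "conditions": [lambda: True]}, {"cmd": ["echo", "step-two"]}])` runs the two commands strictly in order, never starting the second until the first has finished.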
Behavioral and facial thermal variations in 3-to 4-month-old infants during the Still-Face Paradigm
Aureli, Tiziana; Grazia, Annalisa; Cardone, Daniela; Merla, Arcangelo
2015-01-01
Behavioral and facial thermal responses were recorded in twelve 3- to 4-month-old infants during the Still-Face Paradigm (SFP). As in the usual procedure, infants were observed in a three-step, face-to-face interaction: a normal interaction episode (3 min); the “still-face” episode in which the mother became unresponsive and assumed a neutral expression (1 min); a reunion episode in which the mother resumed the interaction (3 min). A fourth step that consisted of a toy play episode (5 min) was added for our own research interest. We coded the behavioral responses through the Infant and Caregiver Engagement Phases system, and recorded facial skin temperature via thermal infrared (IR) imaging. Comparing the still-face episode to the play episode, the infants’ communicative engagement decreased, their engagement with the environment increased, and no differences emerged in self-regulatory and protest behaviors. We also found that facial skin temperature increased. For the behavioral results, infants recognized the interruption of interactional reciprocity caused by the still-face presentation, without showing upset behaviors. According to the autonomic results, the parasympathetic system was more active than the sympathetic, as usually happens in aroused but not distressed situations. With respect to the debate about the causal factor of the still-face effect, thermal data were consistent with behavioral data in showing this effect as related to the violation of infants’ expectations about the nature of the social interaction. Moreover, as the thermal variations were associated with the infants’ subsequent interest in the environment, they indicate that thermal IR imaging is a reliable technique for the detection of physiological variations not only in the emotional system, as indicated by research to date, but also in the attention system. Using this technique for the first time during the SFP allowed us to record autonomic data in a more ecological manner than in previous studies. PMID:26528229
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leeb, Helmut; Dimitriou, Paraskevi; Thompson, Ian
A Consultants Meeting was held at the IAEA Headquarters, from 28 to 30 June 2017, to discuss the results of a test exercise that had been defined and assigned to all participants of the previous meeting held in December 2016. Five codes were used in this exercise: AMUR, AZURE2, RAC, SFRESCO and SAMMY. The results obtained from these codes were compared and further actions were proposed. Participants’ presentations and technical discussions, as well as proposed additional actions have been summarized in this report.
Planning U.S. General Purpose Forces: The Theater Nuclear Forces
1977-01-01
usefulness in combat. All U.S. nuclear weapons deployed in Europe are fitted with Permissive Action Links (PAL), coded devices designed to impede... may be proposed. The Standard Missile 2, the Harpoon missile, the Mk48 torpedo, and the SUBROC anti-submarine rocket are all being considered for... Permissive Action Link. A coded device attached to nuclear weapons deployed abroad that impedes the unauthorized arming or firing of the weapon. Pershing
Wang, Fang; Yu, Jia Ming; Yang, De Qi; Gao, Qian; Hua, Hui; Liu, Yang
2017-02-01
To show how the distribution of facial exposure to non-melanoma biologically effective UV irradiance changes with rotation angle. This study selected the cheek, nose, and forehead as representative facial sites for UV irradiance measurements, which were performed using a rotating manikin and a spectroradiometer. The measured UV irradiance was weighted using action spectra to calculate the biologically effective UV irradiances that cause non-melanoma (UVBEnon-mel) skin cancer. The biologically effective UV radiant exposure (HBEnon-mel) was calculated by summing the UVBEnon-mel data collected over the exposure period. This study revealed the following: (1) the maximum cheek, nose, and forehead exposure UVA and UVB irradiance times and solar elevation angles (SEA) differed from those of the ambient UV irradiance and were influenced by the rotation angles; (2) the UV irradiance exposure increased in the following order: cheek < nose < forehead; (3) the distribution of UVBEnon-mel irradiance differed from that of unweighted UV radiation (UVR) and was influenced by the rotation angles and exposure times; and (4) the maximum percentage decreases in the UVBEnon-mel radiant exposure for the cheek, nose, and forehead from 0° to 180° were 48.41%, 69.48%, and 71.71%, respectively. Rotation angles relative to the sun influence the face's exposure to non-melanoma biologically effective UV. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
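The weighting and summation steps described in this abstract can be written out directly: the measured spectral irradiance E(λ) is multiplied by the action spectrum A(λ) and integrated over wavelength to give the biologically effective irradiance, and the radiant exposure is that irradiance summed over the exposure period. The sketch below is illustrative only; it assumes uniform wavelength and time steps, and the function names and sample numbers are invented, not taken from the study.

```python
def biologically_effective_irradiance(spectral_irradiance, action_spectrum, d_lambda):
    """Weight measured spectral irradiance E(lambda) [W/m^2/nm] by the
    action spectrum A(lambda) [dimensionless] and integrate over wavelength
    (uniform bin width d_lambda in nm) to get an irradiance in W/m^2."""
    return sum(e * a for e, a in zip(spectral_irradiance, action_spectrum)) * d_lambda

def biologically_effective_exposure(irradiances, d_t):
    """Sum biologically effective irradiances [W/m^2] sampled at a uniform
    time step d_t [s] over the exposure period to get exposure in J/m^2."""
    return sum(irradiances) * d_t
```

For instance, with two 5-nm wavelength bins carrying 0.5 and 0.2 W/m²/nm and action-spectrum weights 1.0 and 0.1, the biologically effective irradiance is (0.5·1.0 + 0.2·0.1)·5 = 2.6 W/m²; summing such irradiances over each time step then yields the radiant exposure.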
De Boulle, Koenraad; Fagien, Steven; Sommer, Boris; Glogau, Richard
2010-04-26
Botulinum toxin type A treatment is the foundation of minimally invasive aesthetic facial procedures. Clinicians and their patients recognize the important role, both negative and positive, that facial expression, particularly the glabellar frown lines, plays in self-perception, emotional well-being, and perception by others. This article provides up-to-date information on fundamental properties and mechanisms of action of the major approved formulations of botulinum toxin type A, summarizes recent changes in naming conventions (nonproprietary names) mandated by the United States Food and Drug Administration, and describes the reasons for these changes. The request for these changes provides recognition that formulations of botulinum toxins (eg, onabotulinumtoxinA and abobotulinumtoxinA) are not interchangeable and that dosing recommendations cannot be based on any one single conversion ratio. The extensive safety, tolerability, and efficacy data are summarized in detail, including the patient-reported outcomes that contribute to overall patient satisfaction and probability of treatment continuation. Based on this in-depth review, the authors conclude that botulinum toxin type A treatment remains a cornerstone of facial aesthetic treatments, and clinicians must realize that techniques and dosing from one formulation cannot be applied to others, that each patient should undergo a full aesthetic evaluation, and that products and procedures must be selected in the context of individual needs and goals.
77 FR 67628 - National Fire Codes: Request for Public Input for Revision of Codes and Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-13
... DEPARTMENT OF COMMERCE National Institute of Standards and Technology National Fire Codes: Request... Technology, Commerce. ACTION: Notice. SUMMARY: This notice contains the list of National Fire Protection... the National Fire Protection Association (NFPA) to announce the NFPA's proposal to revise some of its...
78 FR 24725 - National Fire Codes: Request for Public Input for Revision of Codes and Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... DEPARTMENT OF COMMERCE National Institute of Standards and Technology National Fire Codes: Request... Technology, Commerce. ACTION: Notice. SUMMARY: This notice contains the list of National Fire Protection... the National Fire Protection Association (NFPA) to announce the NFPA's proposal to revise some of its...
Processing of Visual--Action Codes by Deaf and Hearing Children: Coding Orientation or "M"-Capacity?
ERIC Educational Resources Information Center
Todman, John; Cowdy, Natascha
1993-01-01
Results from a study in which 25 deaf children and 25 hearing children completed a vocabulary test and a compound stimulus visual information task support the hypothesis that performance on cognitive tasks is dependent on compatibility of task demands with a coding orientation. (SLD)
75 FR 74628 - Tristyrylphenol Ethoxylates; Exemption From the Requirement of a Tolerance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-01
... to EPA under the Federal Food, Drug, and Cosmetic Act (FFDCA), requesting the establishment of an... be potentially affected by this action if you are an agricultural producer, food manufacturer, or... (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...
75 FR 28155 - Acephate, Cacodylic acid, Dicamba, Dicloran et al.; Proposed Tolerance Actions
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
... (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This..., including food service, manufacturing and processing establishments, such as restaurants, cafeterias... concentration shall be limited to a maximum of 1.0 percent active ingredient. Contamination of food or food...
Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P
2016-01-01
Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
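The twin logic behind heritability estimates of this kind can be illustrated with Falconer's classical formula; the authors fitted a full biometrical model, and the correlations below are made up purely for illustration.

```python
# Toy illustration of classical twin logic (Falconer's formula), not the
# biometrical model the authors fitted: heritability is roughly twice the
# difference between identical- and fraternal-twin correlations.

def falconer_h2(r_mz, r_dz):
    """Estimate narrow-sense heritability from twin correlations."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_h2(r_mz=0.60, r_dz=0.35)  # made-up correlations
```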
Imagine no religion: Heretical disgust, anger and the symbolic purity of mind.
Ritter, Ryan S; Preston, Jesse L; Salomon, Erika; Relihan-Johnson, Daniel
2016-01-01
Immoral actions, including physical/sexual (e.g., incest) and social (e.g., unfairness) taboos, are often described as disgusting. But what about immoral thoughts, more specifically, thoughts that violate religious beliefs? Do heretical thoughts taint the purity of mind? The present research examined heretical disgust using self-report measures and facial electromyography. Religious thought violations consistently elicited both self-reported disgust and anger. Feelings of disgust also predicted harsh moral judgement, independent of anger, and were mediated by feelings of "contamination". However, religious thought violations were not associated with a disgust facial expression (i.e., levator labii muscle activity) that was elicited by physically disgusting stimuli. We conclude that people (especially more religious people) do feel disgust in response to heretical thoughts that is meaningfully distinct from anger as a moral emotion. However, heretical disgust is not embodied in a physical disgust response. Rather, disgust has a symbolic moral value that marks heretical thoughts as harmful and aversive.
de Mendonça, Maria Cristina C; Segheto, Natália N; Aarestrup, Fernando M; Aarestrup, Beatriz J V
2018-02-01
Phenol peeling is considered an important agent in the treatment of facial rejuvenation; however, its use has limitations due to its high potential for side effects. This article proposes a new peeling application technique for the treatment of photoaging, aiming to evaluate, clinically and histopathologically, the efficacy of a new way of applying 88% phenol, using a punctuated pattern. The procedure was performed in an outpatient setting, with female patients, on static wrinkles and high flaccidity areas of the face. Accompanying photographs and skin samples were taken for histopathological analysis before and after treatment. It was shown that 88% phenol applied topically using a punctuated technique is effective in skin rejuvenation. The authors thus suggest, based on this new proposal, that further studies be conducted with a larger group of patients to better elucidate the action mechanisms of 88% phenol. This new form of application considerably reduced patients' withdrawal from their regular activities, besides reducing the cost, compared with the conventional procedure.
Role of Kabat physical rehabilitation in Bell's palsy: a randomized trial.
Barbara, Maurizio; Antonini, Giovanni; Vestri, Annarita; Volpini, Luigi; Monini, Simonetta
2010-01-01
When applied at an early stage, Kabat's rehabilitation was shown to provide a better and faster recovery rate in comparison with non-rehabilitated patients. To assess the validity of an early rehabilitative approach to Bell's palsy patients. A randomized study involved 20 consecutive patients (10 males, 10 females; aged 35-42 years) affected by Bell's palsy, classified according to the House-Brackmann (HB) grading system and grouped on the basis of undergoing or not early physical rehabilitation according to Kabat, i.e. a proprioceptive neuromuscular rehabilitation. The evaluation was carried out by measuring the amplitude of the compound motor action potential (CMAP), as well as by observing the initial and final HB grade, at days 4, 7 and 15 after onset of facial palsy. Patients belonging to the rehabilitation group clearly showed an overall improvement of clinical stage at the planned final observation, i.e. 15 days after onset of facial palsy, without presenting greater values of CMAP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
N /A
1999-02-11
This environmental assessment was prepared to assess potential environmental impacts associated with the proposed action to widen and operate unused Trench 36 in the 218-E-12B Low-Level Burial Ground for disposal of low-level waste. Information contained herein will be used by the Manager, U.S. Department of Energy, Richland Operations Office, to determine if the Proposed Action is a major federal action significantly affecting the quality of the human environment. If the Proposed Action is determined to be major and significant, an environmental impact statement will be prepared. If the Proposed Action is determined not to be major and significant, a Finding of No Significant Impact will be issued and the action may proceed. Criteria used to evaluate significance can be found in Title 40, Code of Federal Regulations 1508.27. This environmental assessment was prepared in compliance with the ''National Environmental Policy Act of 1969'', as amended, the Council on Environmental Quality Regulations for Implementing the Procedural Provisions of the ''National Environmental Policy Act'' (Title 40, Code of Federal Regulations 1500-1508), and the U.S. Department of Energy Implementing Procedures for the ''National Environmental Policy Act'' (Title 10, Code of Federal Regulations 1021). The following is a description of each section of this environmental assessment. (1) Purpose and Need for Action. This section provides a brief statement concerning the problem or opportunity the U.S. Department of Energy is addressing with the Proposed Action. Background information is provided. (2) Description of the Proposed Action. This section provides a description of the Proposed Action with sufficient detail to identify potential environmental impacts. (3) Alternatives to the Proposed Action. This section describes reasonable, alternative actions to the Proposed Action, which addresses the Purpose and Need.
A No Action Alternative, as required by Title 10, Code of Federal Regulations 1021, also is described. (4) Affected Environment. This section provides a brief description of the locale in which the Proposed Action would take place. (5) Environmental Impacts. This section describes the range of environmental impacts, beneficial and adverse, of the Proposed Action. Impacts of alternatives briefly are discussed. (6) Permits and Regulatory Requirements. This section provides a brief description of permits and regulatory requirements for the Proposed Action. (7) Organizations Consulted. This section lists any outside groups, agencies, or individuals contacted as part of the environmental assessment preparation and/or review. (8) References. This section provides a list of documents used to contribute information or data in preparation of this environmental assessment.
Chia, Justin; Eroglu, Fehime Kara; Özen, Seza; Orhan, Dicle; Montealegre-Sanchez, Gina; de Jesus, Adriana A; Goldbach-Mansky, Raphaela; Cowen, Edward W
2016-01-01
Key teaching points • SAVI is a recently described interferonopathy resulting from constitutive activation of STING and up-regulation of IFN-β signaling. • SAVI is characterized by facial erythema with telangiectasia, acral/cold-sensitive tissue ulceration and amputations, and interstitial lung disease. It has overlapping features with Aicardi-Goutières syndrome and familial chilblain lupus. • Traditional immunosuppressive medications and biologic therapies appear to be of limited benefit, but JAK inhibitors may impact disease progression. Published by Elsevier Inc.
Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand
2017-09-01
An increasing number of studies have shown a close link between perception and action, which is supposed to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left and right hand responses to objects (e.g., cups) are faster and more accurate if the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account, displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than based on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Fournier, Lisa Renee; Wiediger, Matthew D; McMeans, Ryan; Mattson, Paul S; Kirkwood, Joy; Herzog, Theibot
2010-07-01
Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual response). We investigated whether CI can generalize to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left- or right-hand based on the visual identity of the first stimulus, and then immediately executed a speeded, vocal response ('left' or 'right') to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., 'left') in Experiment 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849-878, 2001a; Behav Brain Sci 24:910-937, 2001b).
Facial Scar Revision: Understanding Facial Scar Treatment
Choi, Hyoung Ju; Shin, Sung Hee
2016-08-01
The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on the facial muscle function, subjective symptoms related to paralysis and depression in patients with facial palsy. This study was a quasi-experimental research with a non-equivalent control group non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscular exercise program including facial massage was performed 20 minutes a day, 3 times a week for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscular function of the experimental group improved significantly compared to the control group. There was no significant difference in symptoms related to paralysis between the experimental group and control group. The level of depression in the experimental group was significantly lower than the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.
Abe, Hiroshi; Lee, Daeyeol
2011-01-01
Knowledge about hypothetical outcomes from unchosen actions is beneficial only when such outcomes can be correctly attributed to specific actions. Here, we show that during a simulated rock-paper-scissors game, rhesus monkeys can adjust their choice behaviors according to both actual and hypothetical outcomes from their chosen and unchosen actions, respectively. In addition, neurons in both dorsolateral prefrontal cortex and orbitofrontal cortex encoded the signals related to actual and hypothetical outcomes immediately after they were revealed to the animal. Moreover, compared to the neurons in the orbitofrontal cortex, those in the dorsolateral prefrontal cortex were more likely to change their activity according to the hypothetical outcomes from specific actions. Conjunctive and parallel coding of multiple actions and their outcomes in the prefrontal cortex might enhance the efficiency of reinforcement learning and also contribute to their context-dependent memory. PMID:21609828
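The learning scheme this abstract describes can be sketched as a value update that uses both the actual outcome of the chosen action and the hypothetical outcomes of the unchosen ones. The learning rates and payoffs below are invented for illustration; this is not the authors' model.

```python
# Sketch of value learning that uses both actual and hypothetical outcomes,
# as in the rock-paper-scissors task above. Learning rates and payoffs are
# invented for illustration.

def update_values(values, chosen, actual_reward, hypothetical, alpha=0.3, alpha_h=0.15):
    """Update the chosen action from its actual outcome, and each unchosen
    action from the hypothetical outcome it would have produced."""
    new = dict(values)
    new[chosen] += alpha * (actual_reward - new[chosen])
    for action, reward in hypothetical.items():
        new[action] += alpha_h * (reward - new[action])
    return new

v = {"rock": 0.0, "paper": 0.0, "scissors": 0.0}
# Opponent played rock: our "rock" tied (0); "paper" would have won (+1),
# "scissors" would have lost (-1).
v = update_values(v, chosen="rock", actual_reward=0.0,
                  hypothetical={"paper": 1.0, "scissors": -1.0})
```

Learning from hypothetical outcomes (here at a separate rate `alpha_h`) is what lets the value of "paper" rise without it ever having been chosen.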
Facial neuropathy with imaging enhancement of the facial nerve: a case report
Mumtaz, Sehreen; Jensen, Matthew B
2014-01-01
A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns with different causes of facial neuropathy. PMID:25574155
Hughes, L.; Eckstein, D.; Owen, A.M.
2008-01-01
The human capacity for voluntary action is one of the major contributors to our success as a species. In addition to choosing actions themselves, we can also voluntarily choose behavioral codes or sets of rules that can guide future responses to events. Such rules have been proposed to be superordinate to actions in a cognitive hierarchy and mediated by distinct brain regions. We used event-related functional magnetic resonance imaging to study novel tasks of rule-based and voluntary action. We show that the voluntary selection of rules to govern future responses to events is associated with activation of similar regions of prefrontal and parietal cortex as the voluntary selection of an action itself. The results are discussed in terms of hierarchical models and the adaptive coding potential of prefrontal neurons and their contribution to a global workspace for nonautomatic tasks. These tasks include the choices we make about our behavior. PMID:18234684
Traumatic facial nerve neuroma with facial palsy presenting in infancy.
Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K
2010-07-01
To describe the management of traumatic neuroma of the facial nerve in a child and literature review. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such lesion is complex in any age group but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.
Outcome of different facial nerve reconstruction techniques.
Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo
There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except 7 patients, where late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. For the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. Regarding hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
... Offshore Drilling Units AGENCY: Coast Guard, DHS. ACTION: Notice of availability. SUMMARY: The Coast Guard...), Code for the Construction and Equipment of Mobile Offshore Drilling Units, 2009 (2009 MODU Code). CG...: Background and Purpose Foreign documented MODUs engaged in any offshore activity associated with the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-19
... Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). If you have any questions regarding the applicability of this action to a... protection, Agricultural commodities, Feed additives, Food additives, Pesticides and pests, Reporting and...
A Critical Reflection on Codes of Conduct in Vocational Education
ERIC Educational Resources Information Center
Bagnall, Richard G.; Nakar, Sonal
2018-01-01
The contemporary cultural context may be seen as presenting a moral void in vocational education, sanctioning the ascendency of instrumental epistemology and a proliferation of codes of conduct, to which workplace actions are expected to conform. Important among the purposes of such codes is that of encouraging ethical conduct, but, true to their…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code... contamination of food, feed, or food-contact/feed-contact surfaces. Compliance with the tolerance level... Apply to Me? You may be potentially affected by this action if you are an agricultural producer, food...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311... Federal Food, Drug, and Cosmetic Act (FFDCA) requesting an exemption from the requirement of a tolerance...? You may be potentially affected by this action if you are an agricultural producer, food manufacturer...
25 CFR 11.500 - Law applicable to civil actions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Law applicable to civil actions. 11.500 Section 11.500 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.500 Law applicable to civil actions. (a) In all civil cases, the...
25 CFR 11.502 - Costs in civil actions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Costs in civil actions. 11.502 Section 11.502 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.502 Costs in civil actions. (a) The court may assess the accruing costs of...
25 CFR 11.500 - Law applicable to civil actions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Law applicable to civil actions. 11.500 Section 11.500 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.500 Law applicable to civil actions. (a) In all civil cases, the...
25 CFR 11.502 - Costs in civil actions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Costs in civil actions. 11.502 Section 11.502 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.502 Costs in civil actions. (a) The court may assess the accruing costs of...
25 CFR 11.500 - Law applicable to civil actions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Law applicable to civil actions. 11.500 Section 11.500 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.500 Law applicable to civil actions. (a) In all civil cases, the...
25 CFR 11.502 - Costs in civil actions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Costs in civil actions. 11.502 Section 11.502 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.502 Costs in civil actions. (a) The court may assess the accruing costs of...
25 CFR 11.501 - Judgments in civil actions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Judgments in civil actions. 11.501 Section 11.501 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.501 Judgments in civil actions. (a) In all civil cases, judgment shall...
25 CFR 11.502 - Costs in civil actions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Costs in civil actions. 11.502 Section 11.502 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.502 Costs in civil actions. (a) The court may assess the accruing costs of...
25 CFR 11.500 - Law applicable to civil actions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Law applicable to civil actions. 11.500 Section 11.500 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.500 Law applicable to civil actions. (a) In all civil cases, the...
25 CFR 11.501 - Judgments in civil actions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Judgments in civil actions. 11.501 Section 11.501 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.501 Judgments in civil actions. (a) In all civil cases, judgment shall...
25 CFR 11.502 - Costs in civil actions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Costs in civil actions. 11.502 Section 11.502 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.502 Costs in civil actions. (a) The court may assess the accruing costs of...
25 CFR 11.500 - Law applicable to civil actions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Law applicable to civil actions. 11.500 Section 11.500 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.500 Law applicable to civil actions. (a) In all civil cases, the...
25 CFR 11.501 - Judgments in civil actions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Judgments in civil actions. 11.501 Section 11.501 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.501 Judgments in civil actions. (a) In all civil cases, judgment shall...
25 CFR 11.501 - Judgments in civil actions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Judgments in civil actions. 11.501 Section 11.501 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.501 Judgments in civil actions. (a) In all civil cases, judgment shall...
Vangeneugden, Joris; Pollick, Frank; Vogels, Rufin
2009-03-01
Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons), from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computation models of action recognition.
A Common Neural Code for Perceived and Inferred Emotion
Skerry, Amy E.; Saxe, Rebecca
2014-01-01
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. PMID:25429141
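The cross-stimulus decoding logic described here (train a classifier on one cue type, test it on the other) can be sketched with a simple nearest-centroid classifier. The voxel patterns and labels below are invented for illustration; the study's actual classifier and feature space may differ:

```python
import math

def centroid(patterns):
    """Mean voxel pattern across training trials."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, pos_c, neg_c):
    """Assign the valence label of the nearer centroid (Euclidean distance)."""
    return "positive" if math.dist(pattern, pos_c) < math.dist(pattern, neg_c) else "negative"

# Hypothetical 3-voxel response patterns (not data from the study):
# train valence centroids on facial-expression trials...
faces_pos = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]]
faces_neg = [[0.1, 0.9, 0.8], [0.2, 1.0, 0.7]]
pos_c, neg_c = centroid(faces_pos), centroid(faces_neg)

# ...then test generalization on animated-situation trials.
anim_trials = [([0.8, 0.1, 0.2], "positive"), ([0.0, 0.8, 0.9], "negative")]
accuracy = sum(classify(p, pos_c, neg_c) == lab for p, lab in anim_trials) / len(anim_trials)
```

Above-chance accuracy under this train/test split is what would indicate a valence representation that abstracts away from the perceptual features of either stimulus type.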
Choi, Kyung-Sik; Kim, Min-Su; Kwon, Hyeok-Gyu; Jang, Sung-Ho
2014-01-01
Objective Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods We prospectively collected data from 11 patients with VS who underwent preoperative DTT of the facial nerve. Imaging results were correlated with intraoperative findings, and postoperative DTT was performed at 3 months. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results The facial nerve course on preoperative tractography was entirely consistent with intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3 cases, anterosuperior in 2 cases, and posteroinferior in 1 case. Postoperative facial nerve tractography confirmed preservation of the nerve in all patients. No patient had severe facial paralysis at one year postoperatively. Conclusion This study shows that preoperative DTT for identification of the facial nerve in VS surgery is an accurate and useful radiological method that could help improve facial nerve preservation. PMID:25289119
Facial animation on an anatomy-based hierarchical face model
NASA Astrophysics Data System (ADS)
Zhang, Yu; Prakash, Edmond C.; Sung, Eric
2003-04-01
In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and underlying skull structure. The deformable skin model has multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate distribution of the muscle force on the skin due to muscle contraction. By the presence of the skull model, our facial model takes advantage of both more accurate facial deformation and the consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at interactive rate with flexible and realistic facial expressions to be generated.
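The "numerical integration of the governing dynamic equations" described here can be illustrated with a minimal sketch: semi-implicit Euler integration of a single skin node modeled as a damped mass-spring system driven by a constant muscle force. All parameter values are illustrative assumptions, not taken from the paper:

```python
# Semi-implicit Euler integration of one skin node (damped mass-spring)
# under a constant muscle force. Illustrative parameters only.
mass, stiffness, damping = 1.0, 50.0, 8.0    # kg, N/m, N*s/m
rest, dt = 0.0, 0.001                        # rest position (m), time step (s)
muscle_force = 5.0                           # N

x, v = rest, 0.0
for _ in range(5000):                        # simulate 5 seconds
    f = muscle_force - stiffness * (x - rest) - damping * v
    v += (f / mass) * dt                     # update velocity first...
    x += v * dt                              # ...then position (semi-implicit)

# x settles near the static equilibrium muscle_force / stiffness = 0.1 m
```

Real physically-based face models integrate many coupled multi-layer nodes per time step, but the per-node update has this same force-accumulate-then-integrate shape, which is what allows the animation to run at interactive rates.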
Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research
SCHMIDT, KAREN L.; COHN, JEFFREY F.
2007-01-01
The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989
Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui
2014-01-01
Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898
Paulus, Markus
2014-10-01
It has been proposed that already in infancy, imitative learning plays a pivotal role in the acquisition of knowledge and abilities. Yet the cognitive mechanisms underlying the acquisition of novel action knowledge through social learning have remained unclear. The present contribution presents an ideomotor approach to imitative learning (IMAIL) in infancy (and beyond) that draws on the ideomotor theory of action control and on recent findings of perception-action matching. According to IMAIL, the central mechanism of imitative and social learning is the acquisition of cascading bidirectional action-effect associations through observation of own and others' actions. First, the observation of the visual effect of own actions leads to the acquisition of first-order action-effect associations, linking motor codes to the action's typical visual effects. Second, observing another person's action leads to motor activation (i.e., motor resonance) due to the first-order associations. This activated motor code then becomes linked to the other salient effects produced by the observed action, leading to the acquisition of (second-order) action-effect associations. These novel action-effect associations enable later imitation of the observed actions. The article reviews recent behavioral and neurophysiological studies with infants and adults that provide empirical support for the model. Furthermore, it is discussed how the model relates to other approaches on social-cognitive development and how developmental changes in imitative abilities can be conceptualized.
Facial dynamics and emotional expressions in facial aging treatments.
Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar
2015-03-01
Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must necessarily include knowledge of facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results by correcting age-related negative expressions while respecting the emotional language of the face. This article successively describes patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment are addressed. © 2015 Wiley Periodicals, Inc.
The effects of an action video game on visual and affective information processing.
Bailey, Kira; West, Robert
2013-04-04
Playing action video games can have beneficial effects on visuospatial cognition and negative effects on social information processing. However, these two effects have not been demonstrated in the same individuals in a single study. The current study used event-related brain potentials (ERPs) to examine the effects of playing an action or non-action video game on the processing of emotion in facial expression. The data revealed that 10h of playing an action or non-action video game had differential effects on the ERPs relative to a no-contact control group. Playing an action game resulted in two effects: one that reflected an increase in the amplitude of the ERPs following training over the right frontal and posterior regions that was similar for angry, happy, and neutral faces; and one that reflected a reduction in the allocation of attention to happy faces. In contrast, playing a non-action game resulted in changes in slow wave activity over the central-parietal and frontal regions that were greater for targets (i.e., angry and happy faces) than for non-targets (i.e., neutral faces). These data demonstrate that the contrasting effects of action video games on visuospatial and emotion processing occur in the same individuals following the same level of gaming experience. This observation leads to the suggestion that caution should be exercised when using action video games to modify visual processing, as this experience could also have unintended effects on emotion processing. Published by Elsevier B.V.
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho
2014-01-01
Increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved in recent years. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG), and the House-Brackmann (HB) grade. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio was 94.1% on ENoG at the 7th postoperative day. At the 3-month and 1-year follow-up examinations with DT imaging facial nerve tractography and ENoG, good facial nerve function was observed. PMID:25024825
Pattern of facial palsy in a typical Nigerian specialist hospital.
Lamina, S; Hanif, S
2012-12-01
Data on the incidence of facial palsy are generally lacking in Nigeria. The aim was to assess the six-year incidence of facial palsy at Murtala Muhammed Specialist Hospital (MMSH), Kano, Nigeria. The records of patients diagnosed with facial problems between January 2000 and December 2005 were scrutinized. Data on diagnosis, age, sex, side affected, occupation, and cause were obtained. A total of 698 patients with facial problems were recorded, of whom 594 (85%) were diagnosed with facial palsy. Among those diagnosed, males (56.2%) had a higher incidence than females; the 20-34 years age group (40.3%) had the greatest prevalence; the most common cause of facial palsy was idiopathic (39.1%), and it was most common among businessmen (31.6%). Right-sided facial palsy (52.2%) was predominant. The incidence of facial palsy was highest in 2003 (25.3%) and decreased from 2004. It was concluded that the incidence of facial palsy was high and that Bell's palsy remains the most common cause of facial (nerve) paralysis.
ERIC Educational Resources Information Center
Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno
2007-01-01
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
NASA Astrophysics Data System (ADS)
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expression driven by emotional state, and facial skin colour, which is closely related to human emotion, is needed to enhance the effect of facial expression. This paper presents innovative techniques for facial animation transformation using facial skin colour, based on linear interpolation and bilinear interpolation. The generated expressions are close to genuine human expressions and enhance the facial expression of the virtual human.
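The two interpolation schemes named here can be sketched in a few lines: linear interpolation blends two colour values, and bilinear interpolation composes two linear interpolations along x followed by one along y. The channel values and sample point below are hypothetical, not taken from the paper:

```python
def lerp(a, b, t):
    """Linear interpolation between colour values a and b, t in [0, 1]."""
    return a + (b - a) * t

def bilerp(c00, c10, c01, c11, tx, ty):
    """Bilinear interpolation: lerp along x on both rows, then lerp along y."""
    return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty)

# Hypothetical red-channel values at four neighbouring skin pixels,
# sampled at the cell centre (illustrative values only):
r = bilerp(100.0, 200.0, 120.0, 220.0, 0.5, 0.5)  # -> 160.0
```

Applying such interpolation per channel across the face mesh is one way a skin-colour transition (e.g., flushing with anger) could be blended smoothly over time and space.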
Cerit, Hilâl; Veer, Ilya M; Dahan, Albert; Niesters, Marieke; Harmer, Catherine J; Miskowiak, Kamilla W; Rombouts, Serge A R B; Van der Does, Willem
2015-12-01
Studies on the neural effects of Erythropoietin (EPO) indicate that EPO may have antidepressant effects. Due to its hematopoietic effects, EPO may cause serious side-effects with repeated administration if patients are not monitored extensively. ARA290 is an EPO-analog peptide without such hematopoietic side-effects but may have neurotrophic and antidepressant effects. The aim of this study was to investigate the possible antidepressant effects of ARA290 in a neuropsychological model of drug action. Healthy participants (N=36) received ARA290 (2mg) or placebo in a double-blind, randomized, parallel-group design. Neural and cognitive effects were assessed one week after administration. Primary outcome measures were the neural processing of fearful vs happy faces and the behavioral recognition of emotional facial expressions. ARA290-treated individuals displayed lower neural responses to happy faces in the fusiform gyrus. ARA290 tended to lower the recognition of happy and disgust facial expressions. Although ARA290 was not associated with a better memory for positive words, it was associated with faster categorization of positive vs negative words. Finally, ARA290 increased attention towards positive emotional pictures. No effects were observed on mood and affective symptoms. ARA290 may modulate some aspects of emotional processing, however, the direction and the strength of its effects do not unequivocally support an antidepressant-like profile for ARA290. Future studies may investigate the effects of different timing and dose. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
Acevedo, Bianca P; Aron, Elaine N; Aron, Arthur; Sangster, Matthew-Donald; Collins, Nancy; Brown, Lucy L
2014-07-01
Theory and research suggest that sensory processing sensitivity (SPS), found in roughly 20% of humans and over 100 other species, is a trait associated with greater sensitivity and responsiveness to the environment and to social stimuli. Self-report studies have shown that high-SPS individuals are strongly affected by others' moods, but no previous study has examined neural systems engaged in response to others' emotions. This study examined the neural correlates of SPS (measured by the standard short-form Highly Sensitive Person [HSP] scale) among 18 participants (10 females) while viewing photos of their romantic partners and of strangers displaying positive, negative, or neutral facial expressions. One year apart, 13 of the 18 participants were scanned twice. Across all conditions, HSP scores were associated with increased brain activation of regions involved in attention and action planning (in the cingulate and premotor area [PMA]). For happy and sad photo conditions, SPS was associated with activation of brain regions involved in awareness, integration of sensory information, empathy, and action planning (e.g., cingulate, insula, inferior frontal gyrus [IFG], middle temporal gyrus [MTG], and PMA). As predicted, for partner images and for happy facial photos, HSP scores were associated with stronger activation of brain regions involved in awareness, empathy, and self-other processing. These results provide evidence that awareness and responsiveness are fundamental features of SPS, and show how the brain may mediate these traits.
Negative ion treatment increases positive emotional processing in seasonal affective disorder.
Harmer, C J; Charles, M; McTavish, S; Favaron, E; Cowen, P J
2012-08-01
Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has efficacy in randomized clinical trials in seasonal affective disorder (SAD). The current study investigated whether a single session of HDNI treatment could reverse negative affective biases seen in seasonal depression using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expression and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism for treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.
Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.
2014-01-01
For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework. PMID:24379153
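The biometrical modeling described here rests on comparing identical (MZ) and nonidentical (DZ) twin correlations. A rough sketch using Falconer's approximation, which is a simplification of the full variance-components models such studies actually fit; the correlations below are invented, not the study's values:

```python
def falconer(r_mz, r_dz):
    """Falconer's approximation from twin correlations:
    heritability h2 = 2*(rMZ - rDZ), shared environment c2 = 2*rDZ - rMZ,
    unique environment e2 = 1 - rMZ."""
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Hypothetical twin correlations for facial masculinity
# (illustrative only; roughly h2 = 0.6, c2 = 0.1, e2 = 0.3):
h2, c2, e2 = falconer(0.70, 0.40)
```

An MZ correlation well above the DZ correlation, as in this invented example, is the pattern that licenses the abstract's conclusion that much of the variation in facial masculinity is genetic.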
Lee, Chi-Heon; Moon, Suk-Hee; Park, Ki-Min; Kang, Youngjin
2016-12-01
In the title compound, [Ir(C11H8N)2(C18H14N)], the Ir(III) ion adopts a distorted octahedral coordination environment defined by three C,N-chelating ligands, one stemming from a 2-(4-phenyl-5-methylpyridin-2-yl)phenyl ligand and two from 2-(pyridin-2-yl)phenyl ligands, arranged in a facial manner. The Ir(III) ion lies almost in the equatorial plane [deviation = 0.0069 (15) Å]. In the crystal, intermolecular π-π stacking interactions, as well as intermolecular C-H⋯π interactions, are present, leading to a three-dimensional network.
Kiesewetter, Jan; Ebersbach, René; Görlitz, Anja; Holzer, Matthias; Fischer, Martin R; Schmidmaier, Ralf
2013-01-01
Problem-solving in terms of clinical reasoning is regarded as a key competence of medical doctors. Little is known about the general cognitive actions underlying medical students' problem-solving strategies. In this study, a theory-based model was used and adapted to investigate the cognitive actions in which medical students engage when working through a case, and how patterns of these actions relate to reaching the correct solution. Twenty-three medical students worked on three clinical nephrology cases using the think-aloud method. The transcribed recordings were coded using a theory-based model consisting of eight different cognitive actions. The coded data were analysed as time sequences in a graphical representation software, and the relationship between the coded data and diagnostic accuracy was investigated with inferential statistical methods. The observation of all main actions in a case elaboration, including evaluation, representation, and integration, was considered a complete model and was found in the majority of cases (56%). This pattern was significantly related to the accuracy of the case solution (φ = 0.55; p < .001). Extent of prior knowledge was related neither to the complete model nor to the correct solution. The proposed model is suitable for empirically verifying the cognitive actions of medical students' problem-solving. The cognitive actions evaluation, representation, and integration are crucial for the complete model and therefore for the accuracy of the solution. The educational implication that may be drawn from this study is to foster students' reasoning by focusing on these higher-level reasoning actions.
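The reported association (φ = 0.55) is a phi coefficient on a 2x2 table (complete model vs. correct solution). A minimal sketch of the standard formula, with hypothetical cell counts rather than the study's raw data:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table:
        a = complete model & correct,   b = complete model & incorrect,
        c = incomplete model & correct, d = incomplete model & incorrect."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical counts (illustrative only, not the study's data):
phi = phi_coefficient(30, 8, 7, 24)
```

Phi ranges from -1 to 1 and, like a correlation, a value around 0.55 indicates a strong association between showing the complete action pattern and solving the case correctly.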
Facial nerve palsy associated with a cystic lesion of the temporal bone.
Kim, Na Hyun; Shin, Seung-Ho
2014-03-01
Facial nerve palsy results in the loss of facial expression and is most commonly caused by a benign, self-limiting inflammatory condition known as Bell palsy. However, there are other conditions that may cause facial paralysis, such as neoplastic conditions of the facial nerve, traumatic nerve injury, and temporal bone lesions. We present a case of facial nerve palsy concurrent with a benign cystic lesion of the temporal bone, adjacent to the tympanic segment of the facial nerve. The patient's symptoms subsided after facial nerve decompression via a transmastoid approach.
40 CFR 35.936-16 - Code or standards of conduct.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provide for penalties, sanctions, or other adequate disciplinary actions to be instituted for project... their agents. The grantee must also inform the project officer of the prosecutive or disciplinary action... disciplinary action. Under § 30.245 of this subchapter, the project officer must notify the Director, EPA...
Contemporary solutions for the treatment of facial nerve paralysis.
Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R
2015-06-01
After reviewing this article, the participant should be able to: 1. Understand the most modern indications and techniques for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for two-stage and one-stage free gracilis muscle transfers for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies, and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.
Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study
Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen
2016-01-01
Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211
Facial approximation-from facial reconstruction synonym to face prediction paradigm.
Stephan, Carl N
2015-05-01
Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.
Reproducibility of the dynamics of facial expressions in unilateral facial palsy.
Alagha, M A; Ju, X; Morley, S; Ayoub, A
2018-02-01
The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second captures of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). The facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible; those of maximum smile and forceful eye closure were not. The limited coordination of the various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
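The alignment step described in this abstract, partial Procrustes superimposition (translation plus rotation, no scaling) followed by a root mean square distance, can be sketched in Python with NumPy. This is an illustrative reconstruction under stated assumptions, not the Di4D system's actual implementation; the landmark arrays are hypothetical.

```python
import numpy as np

def partial_procrustes_rms(a, b):
    """Align landmark set b to a by translation and rotation only
    (partial Procrustes, no scaling) and return the root mean square
    distance between corresponding points after alignment."""
    a_c = a - a.mean(axis=0)            # remove translation
    b_c = b - b.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix
    u, _, vt = np.linalg.svd(b_c.T @ a_c)
    d = np.sign(np.linalg.det(u @ vt))  # guard against reflections
    corr = np.ones(a.shape[1])
    corr[-1] = d
    r = u @ np.diag(corr) @ vt
    b_aligned = b_c @ r
    return float(np.sqrt(np.mean(np.sum((b_aligned - a_c) ** 2, axis=1))))
```

Applied to the corresponding frames of two captures of the same expression, a small RMS distance would indicate a reproducible movement, a large one the opposite.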
ERIC Educational Resources Information Center
Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi
2015-01-01
Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…
Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features
ERIC Educational Resources Information Center
Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu
2011-01-01
Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…
Characterization and recognition of mixed emotional expressions in thermal face image
NASA Astrophysics Data System (ADS)
Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita
2016-05-01
Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. This paper investigates facial skin temperature distribution across mixed thermal facial expressions in our own face database, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis was performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs across different expressions was measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized as positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions were conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
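The ROI-statistics-to-vector step could be sketched as below. The particular statistics chosen (mean, standard deviation, range) and the dictionary layout are illustrative assumptions, since the abstract does not specify which statistical parameters were used.

```python
import numpy as np

ROIS = ("periorbital", "supraorbital", "mouth")

def expression_vector(roi_temps):
    """Build a feature vector for one expression from per-ROI skin
    temperatures: mean, standard deviation, and range for each ROI.
    roi_temps maps each ROI name to a sequence of temperature samples."""
    feats = []
    for roi in ROIS:
        t = np.asarray(roi_temps[roi], dtype=float)
        feats.extend([t.mean(), t.std(), t.max() - t.min()])
    return np.array(feats)
```

Vectors computed this way for known basic and mixed expressions could then be compared, for example by nearest-neighbour matching, to recognize a mixed expression.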
Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno
2007-09-01
This study examined the effects of slowing down the presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in the slow conditions. These findings may give new perspectives for understanding and intervention regarding verbal and emotional perceptive and communicative impairments in autistic populations.
Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.
Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui
2015-01-01
This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 students randomly selected from the Universiti Sains Malaysia (USM) Health Campus (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18-25 years). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) subjects were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05), but no significant difference was found between races. Of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between the golden ratio and facial evaluation score among the Malaysian population.
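The short/ideal/long classification against the golden ratio can be illustrated with a brief sketch. The tolerance band around the golden ratio is a hypothetical cut-off, as the abstract does not state the exact thresholds used in the study.

```python
# Golden ratio, ≈ 1.618
PHI = (1 + 5 ** 0.5) / 2

def classify_facial_shape(facial_index, tolerance=0.05):
    """Classify a facial index (facial height / facial width) as
    'short', 'ideal', or 'long' relative to the golden ratio.
    The tolerance band is illustrative, not the study's cut-off."""
    if abs(facial_index - PHI) <= tolerance:
        return "ideal"
    return "short" if facial_index < PHI else "long"
```

For example, `classify_facial_shape(1.40)` returns `"short"`; indices well above the golden ratio classify as `"long"`.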
A patient with bilateral facial palsy associated with hypertension and chickenpox: learning points.
Al-Abadi, Eslam; Milford, David V; Smith, Martin
2010-11-26
Bilateral facial nerve paralysis is an uncommon presentation, and even more so in children. There are reports of different causes of bilateral facial nerve palsy. It is well established that hypertension and chickenpox cause unilateral facial paralysis, and the importance of checking the blood pressure in children with facial nerve paralysis cannot be stressed enough. The authors report a boy with bilateral facial nerve paralysis in association with hypertension, having recently recovered from chickenpox. The authors review aspects of bilateral facial nerve paralysis, as well as hypertension and chickenpox as causes of facial nerve paralysis.
Facial nerve paralysis secondary to occult malignant neoplasms.
Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J
2004-04-01
This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative explorations. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, or prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.
Cavoy, R
2013-09-01
Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can produce a facial palsy that is usually easily differentiated from a peripheral palsy. The next question is whether a peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful, and a structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay Hunt syndrome, antiviral therapy is added along with prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.
Llamazares-Martín, Clara; Scopa, Chiara; Guillén-Salazar, Federico; Palagi, Elisabetta
2017-07-01
Fine-tuning of communication is well documented in mammalian social play, which relies on a large variety of specific and non-specific signals. Facial expressions are among the most frequent patterns in play communication. The reciprocity of facial signals expressed by the players provides information on their reciprocal attentional state and on the correct perception/decoding of the signal itself. Here, for the first time, we explored the Relaxed Open Mouth (ROM), a widespread playful facial expression among mammals, in the South American sea lion (Otaria flavescens). In this species, like many others, ROM appears to be used as a playful signal as distinct from merely being a biting action. ROM was often reciprocated by players. Even though ROM did not vary in frequency of emission as a function of the number of players involved, it was reciprocated more often during dyadic encounters, in which the players had the highest probability of engaging in a face-to-face interaction. Finally, we found that it was the reciprocation of ROMs, more than their frequency, that was effective in prolonging playful bouts. In conclusion, ROM is widespread in many social mammals and O. flavescens is no exception. At least in those species for which quantitative data are available, ROM seems to be characterized by similar design features, clearly indicating that the signal underwent similar selective pressures. Copyright © 2017 Elsevier B.V. All rights reserved.
Energy and Environment Guide to Action - Chapter 4.3: Building Codes for Energy Efficiency
Provides guidance and recommendations for establishing, implementing, and evaluating state building codes for energy efficiency, which improve energy efficiency in new construction and major renovations. State success stories are included for reference.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. Accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method that learns to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to: (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur on conclusions so we can embark on a plan to use the proposed uplink system; (4) identify the need for the development of appropriate technology and its infusion in the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
Nellis, Jason C.; Ishii, Masaru; Byrne, Patrick J.; Boahene, Kofi D. O.; Dey, Jacob K.; Ishii, Lisa E.
2017-01-01
IMPORTANCE Though anecdotally linked, few studies have investigated the impact of facial paralysis on depression and quality of life (QOL). OBJECTIVE To measure the association between depression, QOL, and facial paralysis in patients seeking treatment at a facial plastic surgery clinic. DESIGN, SETTING, PARTICIPANTS Data were prospectively collected for patients with all-cause facial paralysis and control patients initially presenting to a facial plastic surgery clinic from 2013 to 2015. The control group included a heterogeneous patient population presenting to the facial plastic surgery clinic for evaluation. Patients who had prior facial reanimation surgery or missing demographic and psychometric data were excluded from analysis. MAIN OUTCOMES AND MEASURES Demographics, facial paralysis etiology, facial paralysis severity (graded on the House-Brackmann scale), Beck depression inventory, and QOL scores in both groups were examined. Potential confounders, including self-reported attractiveness and mood, were collected and analyzed. Self-reported scores were measured using a 0 to 100 visual analog scale. RESULTS A total of 263 patients (mean age, 48.8 years; 66.9% female) were analyzed: 175 control patients and 88 patients with facial paralysis. Sex distributions were not significantly different between the facial paralysis and control groups. Patients with facial paralysis had significantly higher depression, lower self-reported attractiveness, lower mood, and lower QOL scores. Overall, 37 patients with facial paralysis (42.1%) screened positive for depression, with the greatest likelihood in patients with House-Brackmann grade 3 or greater (odds ratio, 10.8; 95% CI, 5.13-22.75), compared with 13 control patients (8.1%) (P < .001).
In multivariate regression, facial paralysis and female sex were significantly associated with higher depression scores (constant, 2.08 [95% CI, 0.77–3.39]; facial paralysis effect, 5.98 [95% CI, 4.38–7.58]; female effect, 1.95 [95% CI, 0.65–3.25]). Facial paralysis was associated with lower QOL scores (constant, 81.62 [95% CI, 78.98–84.25]; facial paralysis effect, −16.06 [95% CI, −20.50 to −11.62]). CONCLUSIONS AND RELEVANCE For treatment-seeking patients, facial paralysis was significantly associated with increased depression and worse QOL scores. In addition, female sex was significantly associated with increased depression scores. Moreover, patients with a greater severity of facial paralysis were more likely to screen positive for depression. Clinicians initially evaluating patients should consider the psychological impact of facial paralysis to optimize care. LEVEL OF EVIDENCE 2. PMID:27930763
The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.
Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S
2018-04-01
This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported histories of cosmetic facial plastic surgery or minimally invasive procedures were recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Valentova, Jaroslava Varella; Varella, Marco Antonio Corrêa; Havlíček, Jan; Kleisner, Karel
2017-02-01
Various species use multiple sensory modalities in their communication. In humans, female facial appearance and vocal display are correlated, and it has been suggested that they serve as redundant markers indicating the bearer's reproductive potential and/or residual fertility. In men, evidence for redundancy of facial and vocal attractiveness is ambiguous. We tested the redundancy/multiple signals hypothesis by correlating perceived facial and vocal attractiveness in men and women from two different populations, Brazil and the Czech Republic. We also investigated whether facial and vocal attractiveness are linked to facial morphology. Standardized facial pictures and vocal samples of 86 women (47 from Brazil) and 81 men (41 from Brazil), aged 18-35, were rated for attractiveness by opposite-sex raters. Facial and vocal attractiveness were found to correlate positively in women but not in men. We further applied geometric morphometrics and regressed facial shape coordinates on facial and vocal attractiveness ratings. In women, facial shape was linked to facial attractiveness, but there was no association between facial shape and vocal attractiveness. In men, none of these associations was significant. Having shown that women with more attractive faces also possess more attractive voices, we thus only partly supported the redundant signal hypothesis. Copyright © 2016 Elsevier B.V. All rights reserved.
Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.
2014-01-01
Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397
Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori
2013-11-01
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.
The effects of facial adiposity on attractiveness and perceived leadership ability.
Re, Daniel E; Perrett, David I
2014-01-01
Facial attractiveness has a positive influence on electoral success both in experimental paradigms and in the real world. One parameter that influences facial attractiveness and social judgements is facial adiposity (a facial correlate to body mass index, BMI). Overweight people have high facial adiposity and are perceived to be less attractive and lower in leadership ability. Here, we used an interactive design in order to assess whether the most attractive level of facial adiposity is also perceived as most leader-like. We found that participants reduced facial adiposity more to maximize attractiveness than to maximize perceived leadership ability. These results indicate that facial appearance impacts leadership judgements beyond the effects of attractiveness. We suggest that the disparity between optimal facial adiposity in attractiveness and leadership judgements stems from social trends that have produced thin ideals for attractiveness, while leadership judgements are associated with perception of physical dominance.
A unified coding strategy for processing faces and voices
Yovel, Galit; Belin, Pascal
2013-01-01
Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, where similar computational problems in different modalities are solved using similar solutions. PMID:23664703
The Performance and Observation of Action Shape Future Behaviour
ERIC Educational Resources Information Center
Welsh, Timothy N.; McDougall, Laura M.; Weeks, Daniel J.
2009-01-01
The observation of other people's actions plays an important role in shaping the perceptual, cognitive, and motor processes of the observer. It has been suggested that these social influences occur because the observation of action evokes a representation of that response in the observer and that these codes are subsequently accessed by other…
El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis
2018-05-11
Thanks to current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval, showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). This research also demonstrates variations of facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiologic characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies investigating facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help us understand the functioning and dysfunction of autobiographical memory.
Aberrant patterns of visual facial information usage in schizophrenia.
Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M
2013-05-01
Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool across many spheres of activity. The face recognition process comprises three stages: face detection, feature extraction, and recognition. In this paper, we put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face against the templates available in the database, and updates the attendance of the respective students.
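The matching step this abstract describes (compare each detected face against stored templates, then update attendance) can be sketched as follows. The paper does not specify its matcher, so this is a minimal illustration assuming faces have already been converted to embedding vectors; the names (`mark_attendance`, `templates`) and the similarity threshold are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mark_attendance(detected, templates, threshold=0.9):
    """Match each detected face embedding against stored student
    templates; return the set of student IDs marked present."""
    present = set()
    for face in detected:
        best_id, best_sim = None, threshold
        for student_id, template in templates.items():
            sim = cosine_similarity(face, template)
            if sim >= best_sim:
                best_id, best_sim = student_id, sim
        if best_id is not None:
            present.add(best_id)
    return present

# Toy database: one stored embedding per enrolled student.
templates = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
# Two faces detected in the classroom scan; the second matches nobody.
detected = [[0.98, 0.05, 0.0], [0.1, 0.0, 0.99]]
print(mark_attendance(detected, templates))  # prints {'alice'}
```

A production system would use embeddings from a trained face-recognition model rather than hand-made vectors, but the match-against-database logic is the same.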
Towards the neurobiology of emotional body language.
de Gelder, Beatrice
2006-03-01
People's faces show fear in many different circumstances. However, when people are terrified, as well as showing emotion, they run for cover. When we see a bodily expression of emotion, we immediately know what specific action is associated with a particular emotion, leaving little need for interpretation of the signal, as is the case for facial expressions. Research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are automatically perceived and understood, and their role in emotional communication and decision-making.
Belden, Sarah; Miller, Richard A.
2015-01-01
There is a growing demand for noninvasive anti-aging products for which the periorbital region serves as a critical aspect of facial rejuvenation. This article reviews a multitude of cosmeceutical ingredients that have good scientific data, specifically for the periorbital region. Topical treatment options have exponentially grown from extensively studied retinoids, to recently developed technology, such as growth factors and peptides. With a focus on the periorbital anatomy, the authors review the mechanisms of action of topical cosmeceutical ingredients, effectiveness of ingredient penetration through the stratum corneum, and validity of clinical trials. PMID:26430490
Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song
2018-06-06
Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We modified the classical method by hypoglossal-facial nerve "side"-to-side neurorrhaphy using an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12- to 30-month follow-up period, no further detectable deficits were observed, but an improvement in facial nerve function was evidenced over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous innervation with facial function improved from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect.
Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and allow axonal regrowth in a rich supportive environment.
Patidar, Monika V; Deshmukh, Ashish Ramchandra; Khedkar, Maruti Yadav
2016-01-01
Background: Acne vulgaris is the most common disease of the skin affecting adolescents and young adults, causing psychological distress. The combination of antibiotic resistance, adverse effects of topical and systemic anti-acne medications, and the desire for high-tech approaches have all led to new enthusiasm for light-based acne treatment. Intense pulsed light (IPL) therapy has three modes of action in acne vulgaris, i.e., photochemical, photothermal and photoimmunological. Aims: (1) To study the efficacy of IPL therapy in facial acne vulgaris. (2) To compare two fluences - one normal and the other subnormal - on the right and left sides of the face, respectively. Methods: A total of 45 patients aged 16 to 28 years with inflammatory facial acne vulgaris were included in this prospective study. Baseline data for each patient were recorded. All patients were given 4 sittings of IPL at 2-week intervals and were followed up every 2 weeks for 2 months. The fluence used was 35 J/cm2 on the right side and 20 J/cm2 on the left side. Percentage reduction in lesion count was calculated at each sitting and follow-up and graded as mild (0-25%), moderate (26-50%), good (51-75%) or excellent (76-100%). Side effects were noted. The results were analysed using the Mann-Whitney test. Results: On the right side, excellent results were achieved in 10 (22%), good in 22 (49%) and moderate in 13 (29%) patients. On the left side, excellent results were achieved in 7 (15%), good in 19 (42%) and moderate in 16 (43%) patients. There was no statistically significant difference in the efficacy of the two fluences used in the treatment of facial acne vulgaris. Conclusions: IPL is an effective and safe option for inflammatory acne vulgaris with minimal, reversible side effects. Subnormal fluence is as effective as normal fluence in Indian skin. PMID:27688446
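The four-band grading of percentage lesion-count reduction used in the study can be expressed as a small function. The band boundaries come from the abstract; the function name and inputs are illustrative, and worsening (negative reduction) is not part of the published scheme.

```python
def grade_reduction(baseline_count, current_count):
    """Grade the percentage reduction in inflammatory lesion count
    into the study's four bands (worsening is not modelled)."""
    reduction = 100.0 * (baseline_count - current_count) / baseline_count
    if reduction <= 25:
        return "mild"        # 0-25% reduction
    elif reduction <= 50:
        return "moderate"    # 26-50%
    elif reduction <= 75:
        return "good"        # 51-75%
    else:
        return "excellent"   # 76-100%

print(grade_reduction(40, 5))   # 87.5% reduction -> prints excellent
print(grade_reduction(40, 25))  # 37.5% reduction -> prints moderate
```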
Akwagyiram, Ivy; Butler, Andrew; Maclure, Robert; Colgan, Patrick; Yan, Nicole; Bosma, Mary Lynn
2016-08-25
Gingivitis can develop as a reaction to dental plaque. It can be limited by curtailing plaque build-up through actions including tooth brushing and the use of medicinal mouthwashes, such as those containing chlorhexidine digluconate (CHX), that can reach parts of the mouth that may be missed when brushing. This study aimed to compare dental stain control of twice-daily brushing with a sodium fluoride (NaF) dentifrice containing 67 % sodium bicarbonate (NaHCO3) or a commercially available NaF silica dentifrice without NaHCO3, while using a mouthwash containing 0.2 % CHX. This was a 6-week, randomised, two-site, examiner-blind, parallel-group study in healthy subjects with at least 'mild' stain levels on the facial surfaces of ≥4 teeth and ≥15 bleeding sites. Assessment was via the modified Lobene Stain Index (MLSI), the score being the mean of stain intensity multiplied by area (MLSI [IxA]). One hundred and fifty of 160 randomised subjects completed the study. There were no significant differences in Overall (facial and lingual) MLSI (IxA) scores between dentifrices. The difference in Facial MLSI (IxA) scores was statistically significant at 6 weeks, favouring the 67 % NaHCO3 dentifrice (p = 0.0404). Post-hoc analysis, conducted due to a significant site interaction, found significant differences for all MLSI scores in favour of the 67 % NaHCO3 dentifrice at Site 1 (both weeks) but not Site 2. No overall significant differences were found between the 67 % and 0 % NaHCO3 dentifrices in controlling CHX stain; a significant difference on facial surfaces suggests an advantage of the former on more accessible surfaces. This study was registered at ClinicalTrials.gov (NCT01962493) on 10 October 2013 and was funded by GSK Consumer Healthcare.
Cheung, Pui Kwan; Fok, Lincoln
2017-10-01
Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density number, population data, facial scrub usage rate, sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment. 
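The study's estimate combines direct emissions (beads washed out with untreated sewage) and WWTP emissions (beads escaping treatment). A back-of-the-envelope sketch of that arithmetic follows. Only the average bead density (20,860 particles/g) comes from the abstract; every other input is a placeholder assumption, not a figure from the paper.

```python
# Back-of-the-envelope emission model in the spirit of the study's
# estimate. Only DENSITY comes from the abstract; all other inputs
# below are placeholder assumptions.
DENSITY = 20_860  # microbeads per gram of facial scrub (study average)

def direct_emissions(users, uses_per_year, grams_per_use, untreated_fraction):
    """Beads released to the environment with untreated sewage."""
    beads_used = users * uses_per_year * grams_per_use * DENSITY
    return beads_used * untreated_fraction

def wwtp_emissions(treated_sewage_m3, beads_per_m3_effluent):
    """Beads escaping wastewater treatment plants in treated effluent."""
    return treated_sewage_m3 * beads_per_m3_effluent

direct = direct_emissions(users=50e6, uses_per_year=100,
                          grams_per_use=1.0, untreated_fraction=0.1)
wwtp = wwtp_emissions(treated_sewage_m3=5e10, beads_per_m3_effluent=100)
print(f"direct: {direct:.3g}, WWTP: {wwtp:.3g}, total: {direct + wwtp:.3g}")
```

With realistic inputs, the WWTP term dominates, consistent with the study's finding that more than 80% of emissions originate from incomplete removal in treatment plants.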
Interference among the Processing of Facial Emotion, Face Race, and Face Gender
Li, Yongna; Tse, Chi-Shing
2016-01-01
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621
Kim, B Y; Choi, J W; Park, K C; Youn, S W
2013-02-01
Enlarged facial pores are a longstanding esthetic problem and a matter of cosmetic concern. Several factors are thought to be related to the enlargement of facial pores, although scientific evaluations have not yet been performed. To assess the correlation between facial pores and possible related factors such as age, gender, sebum secretion, skin elasticity, and the presence of acne, using objective bioengineering instruments. Sixty volunteers, 30 males and 30 females, participated in this study. Various parameters of facial pores were assessed using the Robo Skin Analyzer. Facial sebum secretion and skin elasticity were measured using the Sebumeter and the Cutometer, respectively. These data were compared and correlated to examine the possible relationship between facial pores and age, sebum secretion and skin elasticity, according to gender and the presence of acne. Male gender and the presence of acne were correlated with a higher number of facial pores. Sebum secretion levels showed a positive correlation with facial pores. The R7 parameter of skin elasticity was negatively correlated with facial pores, suggesting increased facial pores with decreased skin elasticity. However, age and the severity of acne did not show a definite relationship with facial pores. Male gender, increased sebum, and decreased skin elasticity were most strongly correlated with facial pore development. Further studies on populations with various demographic profiles and more severe acne may help to elucidate the potential effect of aging and acne severity on facial pores. © 2011 John Wiley & Sons A/S.
Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M
2017-09-01
Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geo-morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
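Centroid size, one of the objective 3D measurements named above, has a standard geometric-morphometric definition: the square root of the summed squared distances of all landmarks from their centroid. A minimal sketch of that formula (the landmark coordinates here are illustrative, not the study's 32-landmark data):

```python
import math

def centroid_size(landmarks):
    """Centroid size of a 3D landmark configuration: the square root
    of the summed squared distances of landmarks from their centroid."""
    n = len(landmarks)
    cx = sum(p[0] for p in landmarks) / n
    cy = sum(p[1] for p in landmarks) / n
    cz = sum(p[2] for p in landmarks) / n
    ss = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
             for p in landmarks)
    return math.sqrt(ss)

# A unit square in the z = 0 plane: each corner lies sqrt(0.5) from
# the centroid, so centroid size is sqrt(4 * 0.5) = sqrt(2).
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(centroid_size(square))  # ≈ 1.4142
```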
Facial nerve palsy due to birth trauma
Also called: seventh cranial nerve palsy due to birth trauma; facial palsy - birth trauma; facial palsy - neonate; facial palsy - infant. An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery.
Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R
2014-01-01
Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. 
Facial surface symmetry, which is poorly assessed subjectively, can be easily and reproducibly measured using three-dimensional photogrammetry. The RMSD for facial asymmetry of healthy volunteers clusters at approximately 0.80 ± 0.24 mm. Patients with facial asymmetry due to a pathologic process can be differentiated from normative facial asymmetry based on their RMSDs.
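The symmetry measure described above (RMSD between the native and mirror-reflected face) can be sketched as follows. This assumes the landmarks are already aligned so that x = 0 approximates the plane of maximum symmetry; the paper computes that plane itself, which is omitted here, and the re-pairing step is simplified for a two-landmark toy example.

```python
import math

def reflect_x(points):
    """Mirror a 3D point set about the x = 0 plane."""
    return [(-x, y, z) for (x, y, z) in points]

def rmsd(a, b):
    """Root mean square deviation between corresponding 3D points."""
    sq = [(ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
          for (ax, ay, az), (bx, by, bz) in zip(a, b)]
    return math.sqrt(sum(sq) / len(sq))

# Two paired landmarks (left/right eye corners) on a nearly symmetric
# face, in mm. Reflecting swaps left with right, so the mirrored list
# is reversed to restore landmark-to-landmark correspondence.
face = [(-30.0, 0.0, 10.0), (30.5, 0.0, 10.0)]
mirrored = list(reversed(reflect_x(face)))
print(rmsd(face, mirrored))  # prints 0.5  (mm of asymmetry)
```

A perfectly symmetric face gives an RMSD of 0; the normative mean of about 0.80 mm reported above corresponds to small residual deviations at each landmark.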
Beurskens, Carien H G; Heymans, Peter G
2006-01-01
What is the effect of mime therapy on facial symmetry and severity of paresis in people with facial nerve paresis? Randomised controlled trial. 50 people recruited from the Outpatient department of two metropolitan hospitals with facial nerve paresis for more than nine months. The experimental group received three months of mime therapy consisting of massage, relaxation, inhibition of synkinesis, and co-ordination and emotional expression exercises. The control group was placed on a waiting list. Assessments were made on admission to the trial and three months later by a measurer blinded to group allocation. Facial symmetry was measured using the Sunnybrook Facial Grading System. Severity of paresis was measured using the House-Brackmann Facial Grading System. After three months of mime therapy, the experimental group had improved their facial symmetry by 20.4 points (95% CI 10.4 to 30.4) on the Sunnybrook Facial Grading System compared with the control group. In addition, the experimental group had reduced the severity of their paresis by 0.6 grade (95% CI 0.1 to 1.1) on the House-Brackmann Facial Grading System compared with the control group. These effects were independent of age, sex, and duration of paresis. Mime therapy improves facial symmetry and reduces the severity of paresis in people with facial nerve paresis.
Facial transplantation for massive traumatic injuries.
Alam, Daniel S; Chi, John J
2013-10-01
This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.
Human and animal sounds influence recognition of body language.
Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice
2008-11-25
In naturalistic settings, emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or of animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information occurs for whole body video images with the facial expression blanked, and includes human as well as animal sounds.
Facial dog bite injuries in children: treatment and outcome assessment.
Eppley, Barry L; Schleich, Arno Rene
2013-03-01
Dog bite injuries to a child's face are not an infrequent occurrence. They may require primary and revisional surgery. All result in permanent facial scars. This report describes the treatment and outcomes of dog bites of the face, scalp, and neck based on a case series of 107 children over a 10-year period. The average age of the children was 5.9 years. In cases where the dog was identified (95%), it was known to the victim and their family. The events leading to the dog bite were categorized as provoked (77%) in the majority of the cases. The majority of wounds could be closed primarily without a significant risk of wound infection. Complex reconstructions were required in more severe cases. The majority of families (77%) opted for scar revision between 9 and 18 months after initial treatment to improve the aesthetic outcome. Lawsuit actions resulted in 39 of the cases, making good documentation an essential part of treatment. Dog bite injuries to the face in children frequently require multiple scar revisions to obtain the best possible aesthetic outcome, and the family should be counseled accordingly at the onset of treatment.
Picasso and the art of distortion and dislocation: the artist as researcher and experimentalist.
Cohen, M M
1991-01-01
This paper is divided into four parts. In the first part, some general ideas about Picasso are set forth including: his association with multiple artistic innovations; his use of many different media; his notions of beauty and the relationship between art and nature; his ideas about the placement of body parts, symmetry, and color; and his relish in producing paintings with shock value. Emphasis is placed on the relationship between art and science and on Picasso's role as a researcher and experimentalist. In the second part, works of art during the cubist and postcubist years are discussed with emphasis on the development of simultaneity--the coexistence of different views of an object in the same picture. Topics included are: facial grafting; facial accommodation; multifacialism; profile insertion; snout formation, elevation, and rotation; concurrent faces; and more comprehensive simultaneity. In the third part, other influences on Picasso are presented including: the effects of action, motion, and activity; Surrealism and sexual symbolism; and Picasso's artistic treatment of women. In the fourth part, a comparison is made between Picasso's experiments and nature's experiments.
Facial expressions and pair bonds in hylobatids.
Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H
2018-06-06
Facial expressions are an important component of primate communication, functioning to transmit social information and to modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested on a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases, so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony (FES). We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions.
Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony, or rates of behavioral synchrony. We found that FES was the strongest measure of hylobatid expressiveness and was significantly positively correlated with higher sociality index scores; however, FES showed no significant correlation with behavioral synchrony. No noticeable differences between pairs were found regarding rates of behavioral or territorial synchrony. Facial repertoire sizes and FES were not significantly correlated with rates of behavioral synchrony or territorial synchrony. Our study confirms an important role of facial expressions in maintaining pair bonds and coordinating activities in hylobatids. Data support the hypothesis that facial expressions and sociality have been linked in hylobatid and primate evolution. It is possible that larger facial repertoires may have contributed to strengthening pair bonds in primates, because richer facial repertoires provide more opportunities for FES which can effectively increase the "understanding" between partners through smoother coordination of interaction patterns. This study supports the social complexity hypothesis as the driving force for the evolution of complex communication signaling. © 2018 Wiley Periodicals, Inc.
Yetiser, Sertac
2018-06-08
Three patients with large intratemporal facial schwannomas underwent tumor removal and facial nerve reconstruction with hypoglossal anastomosis. The surgical strategy for each case was tailored to the location of the mass and its extension along the facial nerve. The aim was to provide data on the different clinical aspects of facial nerve schwannoma, the appropriate planning for management, and the predictive outcomes of facial function. Three patients with facial schwannomas (two men and one woman, ages 45, 36, and 52 years, respectively) who presented to the clinic between 2009 and 2015 were reviewed. They all had hearing loss but normal facial function. All patients were operated on with radical tumor removal via mastoidectomy and subtotal petrosectomy and simultaneous cranial nerve (CN) VII-CN XII anastomosis. Multiple segments of the facial nerve were involved, ranging in size from 3 to 7 cm. In the follow-up period of 9 to 24 months, there was no tumor recurrence. Facial function was scored as House-Brackmann grade II or III, but two patients are still in the process of functional recovery. Conservative treatment with sparing of the nerve is considered in patients with small tumors. Excision of a large facial schwannoma with immediate hypoglossal nerve grafting as a primary procedure can provide satisfactory facial nerve function. One disadvantage of performing the anastomosis is that there is not enough neural tissue just before the bifurcation of the main stump to allow neural suturing without tension, because middle fossa extension of the facial schwannoma frequently involves the main facial nerve at the stylomastoid foramen. Reanimation should therefore proceed with extensive backward mobilization of the hypoglossal nerve. Georg Thieme Verlag KG Stuttgart · New York.
Imaging the Facial Nerve: A Contemporary Review
Gupta, Sachin; Mends, Francine; Hagiwara, Mari; Fatterpekar, Girish; Roehm, Pamela C.
2013-01-01
Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell's palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers. PMID:23766904
Dynamic facial expression recognition based on geometric and texture features
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Zengfu
2018-04-01
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations across pairwise images are used to perform the dynamic facial expression recognition task. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
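The pipeline this abstract describes (per-pair landmark displacements concatenated with texture differences, fed to an SVM) can be sketched roughly as follows. This is not the authors' implementation; the landmark count, texture dimensionality, and the synthetic data are invented placeholders for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def pairwise_features(first_landmarks, later_landmarks,
                      first_texture, later_texture):
    """Fuse geometric and texture cues for one (first frame, later frame) pair.

    Geometric part: displacement of each facial landmark relative to the
    first (neutral) frame. Texture part: difference of texture descriptors.
    """
    geometric = (later_landmarks - first_landmarks).ravel()
    texture = (later_texture - first_texture).ravel()
    return np.concatenate([geometric, texture])

# Synthetic stand-in data: 100 image pairs, 68 2-D landmarks per face,
# and a 32-D texture descriptor per frame (all hypothetical sizes).
n_pairs, n_landmarks, tex_dim = 100, 68, 32
labels = rng.integers(0, 6, size=n_pairs)  # six expression classes

X = np.stack([
    pairwise_features(rng.normal(size=(n_landmarks, 2)),
                      rng.normal(size=(n_landmarks, 2)) + labels[i],
                      rng.normal(size=tex_dim),
                      rng.normal(size=tex_dim) + labels[i])
    for i in range(n_pairs)
])

# Classify the fused feature vectors with an SVM, as in the abstract.
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In practice the landmarks would come from a facial landmark detector and the texture descriptors from local appearance features around those landmarks; the synthetic shifts here merely make the classes separable so the fusion-plus-SVM step is demonstrable end to end.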
Axelrod, Vadim; Yovel, Galit
2010-08-15
Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Sound-induced facial synkinesis following facial nerve paralysis.
Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F
2009-08-01
Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.
Noninvasive Facial Rejuvenation. Part 1: Patient-Directed
Commander, Sarah Jane; Chang, Daniel; Fakhro, Abdulla; Nigro, Marjory G.; Lee, Edward I.
2016-01-01
A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin and rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but the patients are in control of this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the first in a three-part series describing noninvasive facial rejuvenation. The authors focus on patient-directed facial rejuvenation. It is important, however, to emphasize that even in a patient-directed modality, a physician's involvement through education and guidance is integral to its success. PMID:27478421