Sample records for facial information processing

  1. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    PubMed

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficient facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia make inadequate use of configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out relating facial affect recognition to symptom dimensions and the inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect accounted for 41.2% of the variance in facial affect recognition. This study confirms the presence, in antipsychotic-free patients, of a deficit in facial affect recognition as well as dysfunctional use of configural information. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure to correctly use configural information stand as independent contributors. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  2. A shape-based account for holistic face processing.

    PubMed

    Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle

    2016-04-01

    Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes; facial expressions can also be recognized from visual information alone.

  4. Dynamics of processing invisible faces in the brain: automatic neural encoding of facial expression information.

    PubMed

    Jiang, Yi; Shannon, Robert W; Vizueta, Nathalie; Bernat, Edward M; Patrick, Christopher J; He, Sheng

    2009-02-01

    The fusiform face area (FFA) and the superior temporal sulcus (STS) are suggested to process facial identity and facial expression information respectively. We recently demonstrated a functional dissociation between the FFA and the STS as well as correlated sensitivity of the STS and the amygdala to facial expressions using an interocular suppression paradigm [Jiang, Y., He, S., 2006. Cortical responses to invisible faces: dissociating subsystems for facial-information processing. Curr. Biol. 16, 2023-2029.]. In the current event-related brain potential (ERP) study, we investigated the temporal dynamics of facial information processing. Observers viewed neutral, fearful, and scrambled face stimuli, either visibly or rendered invisible through interocular suppression. Relative to scrambled face stimuli, intact visible faces elicited larger positive P1 (110-130 ms) and larger negative N1 or N170 (160-180 ms) potentials at posterior occipital and bilateral occipito-temporal regions respectively, with the N170 amplitude significantly greater for fearful than neutral faces. Invisible intact faces generated a stronger signal than scrambled faces at 140-200 ms over posterior occipital areas whereas invisible fearful faces (compared to neutral and scrambled faces) elicited a significantly larger negative deflection starting at 220 ms along the STS. These results provide further evidence for cortical processing of facial information without awareness and elucidate the temporal sequence of automatic facial expression information extraction.

  5. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster for male facial images; on the other hand, the classification of the female gender occurred faster for faces with a happy expression than for faces with an angry expression. According to the event-related potential results, expression classification was influenced by gender during the face structural processing stage (as indexed by the N170), which indicates that facial gender promotes or interferes with the coding of facial expression features. However, gender processing was affected by facial expressions at more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  6. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function by using information acquired from simpler, built-in functions. We propose a self-learning and recognition system for human facial expressions that operates within a natural relationship between human and robot. A robot equipped with this system can understand human facial expressions and behave in accordance with them once the learning process is complete. The system is modeled on the process by which a baby learns its parents' facial expressions. The robot is equipped with a camera, from which the system obtains face images, and with CdS sensors on its head, from which it obtains information about human actions. Using the information from these sensors, the robot extracts the features of each facial expression. After self-learning is complete, when a person changes his or her facial expression in front of the robot, the robot performs the action associated with that expression.
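
    As a loose illustration of the learn-then-react scheme this abstract describes, here is a toy nearest-prototype learner in Python. It is a sketch under assumed details: the feature vectors, action labels, and the pairing of camera-derived features with sensor-derived action cues are all invented here, not taken from the paper.

    ```python
    import numpy as np

    class ExpressionLearner:
        """Toy stand-in for the robot's self-learning stage."""

        def __init__(self):
            self.samples = []  # (expression feature vector, action label) pairs

        def observe(self, features, action):
            # Learning phase: store one co-occurring (expression, action) example,
            # e.g. camera-derived features paired with a touch-sensor action cue.
            self.samples.append((np.asarray(features, dtype=float), action))

        def _prototypes(self):
            # Average the feature vectors collected for each action label.
            groups = {}
            for f, a in self.samples:
                groups.setdefault(a, []).append(f)
            return {a: np.mean(fs, axis=0) for a, fs in groups.items()}

        def react(self, features):
            # Recognition phase: act on the closest learned expression prototype.
            f = np.asarray(features, dtype=float)
            protos = self._prototypes()
            return min(protos, key=lambda a: np.linalg.norm(protos[a] - f))

    learner = ExpressionLearner()
    learner.observe([0.9, 0.1], "approach")  # invented smile-like features
    learner.observe([0.1, 0.8], "withdraw")  # invented frown-like features
    print(learner.react([0.8, 0.2]))         # -> "approach"
    ```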

  7. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. Meanwhile, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
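
    For readers unfamiliar with how an N170 latency effect of this kind is quantified, the sketch below shows a typical measurement with MNE-Python: average the epochs of a condition, then take the most negative peak in a 130-200 ms window at an occipito-temporal electrode. The file name, condition labels, and electrode are hypothetical placeholders, not this study's data or pipeline.

    ```python
    import mne  # assumes MNE-Python and a pre-epoched EEG dataset

    # Placeholder file and condition names; real studies would use their own.
    epochs = mne.read_epochs("faces-epo.fif")

    for condition in ("fearful/natural", "fearful/bluish"):
        # Average the condition's epochs and keep one occipito-temporal channel.
        evoked = epochs[condition].average().pick(["P8"])
        # Most negative deflection in the N170 window (seconds, volts).
        ch, latency, amplitude = evoked.get_peak(
            tmin=0.13, tmax=0.20, mode="neg", return_amplitude=True
        )
        print(f"{condition}: N170 at {latency * 1e3:.0f} ms, "
              f"{amplitude * 1e6:.1f} µV at {ch}")
    ```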

  8. Identifying differences in biased affective information processing in major depression.

    PubMed

    Gollan, Jackie K; Pane, Heather T; McCloskey, Michael S; Coccaro, Emil F

    2008-05-30

    This study investigates the extent to which participants with major depression differ from healthy comparison participants in irregularities in affective information processing, characterized by deficits in facial expression recognition, intensity categorization, and reaction time to identifying emotionally salient and neutral information. Data on diagnoses, symptom severity, and affective information processing using a facial recognition task were collected from 66 participants, male and female, between ages 18 and 54 years, grouped by major depressive disorder (N=37) or healthy non-psychiatric (N=29) status. Findings from MANCOVAs revealed that major depression was associated with a significantly longer reaction time to sad facial expressions compared with healthy status. Also, depressed participants demonstrated a negative bias towards interpreting neutral facial expressions as sad significantly more often than healthy participants. In turn, healthy participants interpreted neutral faces as happy significantly more often than depressed participants. No group differences were observed for facial expression recognition and intensity categorization. The observed effects suggest that depression is associated with altered perception of the intensity of negative affective stimuli, delayed processing of sad affective information, and a bias towards interpreting neutral faces as sad.

  9. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.
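
    The coarse-then-fine coding described in this review is commonly tested by decoding the stimulus category from population activity in successive time windows. The sketch below illustrates the logic only, using simulated data in which category information is injected into the late time bins; it is not the review's analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Simulated population responses: trials x neurons x time bins.
    rng = np.random.default_rng(0)
    n_trials, n_neurons, n_bins = 200, 50, 10       # e.g. 10 bins of 50 ms
    labels = rng.integers(0, 2, n_trials)           # fine category per trial
    rates = rng.normal(size=(n_trials, n_neurons, n_bins))
    rates[:, :, 6:] += labels[:, None, None] * 0.5  # category info appears late

    # Decode the category separately in each time window; accuracy should
    # rise in the later bins, mirroring a "later portion carries detail" code.
    for b in range(n_bins):
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              rates[:, :, b], labels, cv=5).mean()
        print(f"bin {b}: decoding accuracy {acc:.2f}")
    ```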

  10. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  11. Facial affect processing and depression susceptibility: cognitive biases and cognitive neuroscience.

    PubMed

    Bistricky, Steven L; Ingram, Rick E; Atchley, Ruth Ann

    2011-11-01

    Facial affect processing is essential to social development and functioning and is particularly relevant to models of depression. Although cognitive and interpersonal theories have long described different pathways to depression, cognitive-interpersonal and evolutionary social risk models of depression focus on the interrelation of interpersonal experience, cognition, and social behavior. We therefore review the burgeoning depressive facial affect processing literature and examine its potential for integrating disciplines, theories, and research. In particular, we evaluate studies in which information processing or cognitive neuroscience paradigms were used to assess facial affect processing in depressed and depression-susceptible populations. Most studies have assessed and supported cognitive models. This research suggests that depressed and depression-vulnerable groups show abnormal facial affect interpretation, attention, and memory, although findings vary based on depression severity, comorbid anxiety, or length of time faces are viewed. Facial affect processing biases appear to correspond with distinct neural activity patterns and increased depressive emotion and thought. Biases typically emerge in depressed moods but are occasionally found in the absence of such moods. Indirect evidence suggests that childhood neglect might cultivate abnormal facial affect processing, which can impede social functioning in ways consistent with cognitive-interpersonal and interpersonal models. However, reviewed studies provide mixed support for the social risk model prediction that depressive states prompt cognitive hypervigilance to social threat information. We recommend prospective interdisciplinary research examining whether facial affect processing abnormalities promote, or are promoted by, depressogenic attachment experiences, negative thinking, and social dysfunction.

  12. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  13. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    PubMed

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
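
    At its core, the within-network connectivity (WNC) measure described here is each voxel's average resting-state correlation with all other voxels of the network. A rough sketch with simulated time series standing in for real fMRI data:

    ```python
    import numpy as np

    # Simulated resting-state time series for the face-network voxels only;
    # a real analysis would extract these from fMRI data within the FN mask.
    rng = np.random.default_rng(1)
    n_timepoints, n_voxels = 240, 500
    ts = rng.normal(size=(n_timepoints, n_voxels))

    corr = np.corrcoef(ts.T)            # voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, np.nan)      # exclude each voxel's self-correlation
    wnc = np.nanmean(corr, axis=1)      # one WNC value per voxel
    print(wnc.shape)                    # (500,)
    ```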

  15. Exploring the Role of Spatial Frequency Information during Neural Emotion Processing in Human Infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-01-01

    Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants (N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.
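
    Filtered face stimuli of the kind used in such ERP studies are commonly produced by Gaussian low-pass filtering and taking the residual as the high-pass version. A minimal sketch, with an arbitrary cutoff rather than this study's filter parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    # Stand-in for a grayscale face image; a real study would load a photo.
    face = np.random.rand(256, 256)

    # Gaussian blur keeps the low spatial frequencies; subtracting the blur
    # from the original keeps the high spatial frequencies. Sigma is an
    # arbitrary placeholder cutoff, not the value used in the study above.
    low_sf = ndimage.gaussian_filter(face, sigma=8)   # low-pass version
    high_sf = face - low_sf                           # high-pass residual
    ```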

  16. Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces.

    PubMed

    Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine

    2014-07-01

    Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much debated. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
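
    The analysis described above ultimately reduces to assigning each fixation to an area of interest (AOI) around a facial feature and counting hits. A toy sketch with made-up AOI rectangles and fixation coordinates:

    ```python
    # Hypothetical AOI rectangles, (x0, y0, x1, y1) in image pixels.
    aois = {
        "left eye":  (80, 110, 120, 135),
        "right eye": (140, 110, 180, 135),
        "nose":      (110, 140, 150, 175),
    }

    def aoi_of(x, y):
        # Return the first AOI containing the fixation, else "other".
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "other"

    # Made-up fixation coordinates; real ones come from the eye tracker.
    fixations = [(95, 120), (132, 150), (160, 122)]
    counts = {}
    for x, y in fixations:
        name = aoi_of(x, y)
        counts[name] = counts.get(name, 0) + 1
    print(counts)  # e.g. {'left eye': 1, 'nose': 1, 'right eye': 1}
    ```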

  17. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  18. Faces in Context: A Review and Systematization of Contextual Influences on Affective Face Processing

    PubMed Central

    Wieser, Matthias J.; Brosch, Tobias

    2012-01-01

    Facial expressions are of eminent importance for social interaction as they convey information about other individuals’ emotions and social intentions. According to the predominant “basic emotion” approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual’s face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research. PMID:23130011

  19. Poor sleep quality predicts deficient emotion information processing over time in early adolescence.

    PubMed

    Soffer-Dudek, Nirit; Sadeh, Avi; Dahl, Ronald E; Rosenblat-Stein, Shiran

    2011-11-01

    There is deepening understanding of the effects of sleep on emotional information processing. Emotion information processing is a key aspect of social competence, which undergoes important maturational and developmental changes in adolescence; however, most research in this area has focused on adults. Our aim was to test the links between sleep and emotion information processing during early adolescence. Sleep and facial information processing were assessed objectively during 3 assessment waves, separated by 1-year lags. Data were obtained in natural environments: sleep was assessed in home settings, and facial information processing was assessed at school. Participants were 94 healthy children (53 girls, 41 boys), aged 10 years at Time 1. Facial information processing was tested under neutral (gender identification) and emotional (emotional expression identification) conditions. Sleep was assessed in home settings using actigraphy for 7 nights at each assessment wave. Waking > 5 min was considered a night awakening. In multilevel models, elevated night awakenings and decreased sleep efficiency significantly predicted poor performance only in the emotional information processing condition (e.g., b = -1.79, SD = 0.52, confidence interval: lower boundary = -2.82, upper boundary = -0.076, t(416.94) = -3.42, P = 0.001). Poor sleep quality is associated with compromised emotional information processing during early adolescence, a sensitive period in socio-emotional development.
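
    A multilevel (mixed-effects) model of this kind, with repeated yearly waves nested within children, can be sketched with statsmodels. The data file and column names below are hypothetical placeholders, not the study's variables:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per child per assessment wave.
    data = pd.read_csv("sleep_emotion_waves.csv")

    # Fixed effects for sleep quality and wave; random intercept per child
    # to account for the repeated measures.
    model = smf.mixedlm(
        "emotion_accuracy ~ night_awakenings + sleep_efficiency + wave",
        data,
        groups=data["child_id"],
    )
    result = model.fit()
    print(result.summary())
    ```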

  20. Facial Electromyographic Responses to Emotional Information from Faces and Voices in Individuals with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Magnee, Maurice J. C. M.; de Gelder, Beatrice; van Engeland, Herman; Kemner, Chantal

    2007-01-01

    Background: Despite extensive research, it is still debated whether impairments in social skills of individuals with pervasive developmental disorder (PDD) are related to specific deficits in the early processing of emotional information. We aimed to test both automatic processing of facial affect as well as the integration of auditory and visual…

  21. Facial emotion processing in pediatric social anxiety disorder: Relevance of situational context.

    PubMed

    Schwab, Daniela; Schienle, Anne

    2017-08-01

    Social anxiety disorder (SAD) typically begins in childhood. Previous research has demonstrated that adult patients respond with elevated late positivity (LP) to negative facial expressions. In the present study on pediatric SAD, we investigated responses to negative facial expressions and the role of social context information. Fifteen children with SAD and 15 non-anxious controls were first presented with images of negative facial expressions with masked backgrounds. Following this, the complete images, which included context information, were shown. The negative expressions were either the result of an emotion-relevant elicitor (e.g., social exclusion) or an emotion-irrelevant elicitor (e.g., weight lifting). Relative to controls, the clinical group showed elevated parietal LP during face processing with and without context information. Both groups differed in their frontal LP depending on the type of context. In SAD patients, frontal LP was lower in emotion-relevant than emotion-irrelevant contexts. We conclude that SAD patients direct more automatic attention towards negative facial expressions (parietal effect) and are less capable of integrating affective context information (frontal effect). Copyright © 2017 Elsevier Ltd. All rights reserved.

  22. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  23. A Lack of Left Visual Field Bias when Individuals with Autism Process Faces

    ERIC Educational Resources Information Center

    Dundas, Eva M.; Best, Catherine A.; Minshew, Nancy J.; Strauss, Mark S.

    2012-01-01

    It has been established that typically developing individuals have a bias to attend to facial information in the left visual field (LVF) more than in the right visual field. This bias is thought to arise from the right hemisphere's advantage for processing facial information, with evidence suggesting it to be driven by the configural demands of…

  24. The face-selective N170 component is modulated by facial color.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2012-08-01

    Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, which were generated by rotating the original facial color distribution around the white point by 45° for each human face. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.

  25. Developmental Changes in the Primacy of Facial Cues for Emotion Recognition

    ERIC Educational Resources Information Center

    Leitzke, Brian T.; Pollak, Seth D.

    2016-01-01

    There have been long-standing differences of opinion regarding the influence of the face relative to that of contextual information on how individuals process and judge facial expressions of emotion. However, developmental changes in how individuals use such information have remained largely unexplored and could be informative in attempting to…

  26. Facial color processing in the face-selective regions: an fMRI study.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Tanabe, Hiroki C; Sadato, Norihiro; Nakauchi, Shigeki

    2014-09-01

    Facial color is important information for social communication as it provides important clues for recognizing a person's emotion and health condition. Our previous EEG study suggested that the N170 at the left occipito-temporal site is related to facial color processing (Nakajima et al., [2012]: Neuropsychologia 50:2499-2505). However, because of the low spatial resolution of the EEG experiment, which brain regions are involved in facial color processing remained controversial. In the present study, we examined the neural substrates of facial color processing using functional magnetic resonance imaging (fMRI). We measured brain activity from 25 subjects during the presentation of natural- and bluish-colored faces and their scrambled images. The bilateral fusiform face area (FFA) and occipital face area (OFA) were localized by the contrast of natural-colored faces versus natural-colored scrambled images. Moreover, region of interest (ROI) analysis showed that the left FFA was sensitive to facial color, whereas the right FFA and the right and left OFA were insensitive to facial color. In combination with our previous EEG results, these data suggest that the left FFA may play an important role in facial color processing. Copyright © 2014 Wiley Periodicals, Inc.

  27. What the Human Brain Likes About Facial Motion

    PubMed Central

    Schultz, Johannes; Brockhaus, Matthias; Bülthoff, Heinrich H.; Pilz, Karin S.

    2013-01-01

    Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Up to now, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces. PMID:22535907

  28. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  29. Action recognition is sensitive to the identity of the actor.

    PubMed

    Ferstl, Ylva; Bülthoff, Heinrich; de la Rosa, Stephan

    2017-09-01

    Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been the subject of discussion in embodied approaches to action recognition. Here we examine one possibility, namely that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action-related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. This finding demonstrates that the neural processes underlying action recognition are sensitive to identity information (including facial identity) and are thereby not exclusively tuned to actions. We suggest that such response properties help humans know who carried out an action. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  30. Developmental changes in the primacy of facial cues for emotion recognition.

    PubMed

    Leitzke, Brian T; Pollak, Seth D

    2016-04-01

    There have been long-standing differences of opinion regarding the influence of the face relative to that of contextual information on how individuals process and judge facial expressions of emotion. However, developmental changes in how individuals use such information have remained largely unexplored and could be informative in attempting to reconcile these opposing views. The current study tested for age-related differences in how individuals prioritize viewing emotional faces versus contexts when making emotion judgments. To do so, we asked 4-, 8-, and 12-year-old children as well as college students to categorize facial expressions of emotion that were presented with scenes that were either congruent or incongruent with the facial displays. During this time, we recorded participants' gaze patterns via eye tracking. College students directed their visual attention primarily to the face, regardless of contextual information. Children, however, divided their attention between both the face and the context as sources of emotional information depending on the valence of the context. These findings reveal a developmental shift in how individuals process and integrate emotional cues. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  31. Influence of spatial frequency and emotion expression on face processing in patients with panic disorder.

    PubMed

    Shim, Miseon; Kim, Do-Won; Yoon, Sunkyung; Park, Gewnhi; Im, Chang-Hwan; Lee, Seung-Hwan

    2016-06-01

    Deficient facial emotion processing is a major characteristic of patients with panic disorder. It is known that visual stimuli with different spatial frequencies take distinct neural pathways. This study investigated facial emotion processing involving stimuli presented at broad, high, and low spatial frequencies in patients with panic disorder. Eighteen patients with panic disorder and 19 healthy controls were recruited. Seven event-related potential (ERP) components (P100, N170, early posterior negativity (EPN), vertex positive potential (VPP), N250, P300, and late positive potential (LPP)) were evaluated while the participants looked at fearful and neutral facial stimuli presented at three spatial frequencies. When a fearful face was presented, panic disorder patients showed a significantly increased P100 amplitude in response to low spatial frequency compared to high spatial frequency, whereas healthy controls demonstrated significant broad-spatial-frequency-dependent processing in P100 amplitude. Vertex positive potential amplitude was significantly increased for high and broad spatial frequencies, compared to low spatial frequency, in panic disorder. Early posterior negativity amplitude differed significantly between high and broad spatial frequency processing, and between low and broad spatial frequency processing, in both groups, regardless of facial expression. The possibly confounding effects of medication could not be controlled. During early visual processing, patients with panic disorder prefer global to detailed information. However, in later processing, panic disorder patients overuse detailed information for the perception of facial expressions. These findings suggest that unique spatial-frequency-dependent facial processing could shed light on the neural pathology associated with panic disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  32. Processing of subliminal facial expressions of emotion: a behavioral and fMRI study.

    PubMed

    Prochnow, D; Kossack, H; Brunheim, S; Müller, K; Wittsack, H-J; Markowitsch, H-J; Seitz, R J

    2013-01-01

    The recognition of emotional facial expressions is an important means to adjust behavior in social interactions. As facial expressions widely differ in their duration and degree of expressiveness, they often manifest as short and transient expressions below the level of awareness. In this combined behavioral and fMRI study, we aimed to examine whether emotional facial expressions that are not consciously accessible (subliminal) influence empathic judgments, and which brain activations are related to this. We hypothesized that subliminal facial expressions of emotions, masked with neutral expressions of the same faces, induce empathic processing similar to consciously accessible (supraliminal) facial expressions. Our behavioral data in 23 healthy subjects showed that subliminal emotional facial expressions of 40 ms duration affect the judgments of the subsequent neutral facial expressions. In the fMRI study in 12 healthy subjects, it was found that both supra- and subliminal emotional facial expressions shared a widespread network of brain areas including the fusiform gyrus, the temporo-parietal junction, and the inferior, dorsolateral, and medial frontal cortex. Compared with subliminal facial expressions, supraliminal facial expressions led to greater activation of the left occipital and fusiform face areas. We conclude that masked subliminal emotional information is suited to trigger processing in brain areas which have been implicated in empathy and, thereby, in social encounters.

  33. Individual differences and the effect of face configuration information in the McGurk effect.

    PubMed

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2018-04-01

    The McGurk effect, which denotes the influence of visual information on audiovisual speech perception, is less frequently observed in individuals with autism spectrum disorder (ASD) compared to those without it; the reason for this remains unclear. Several studies have suggested that facial configuration context might play a role in this difference. More specifically, people with ASD show a local processing bias for faces; that is, they process global face information to a lesser extent. This study examined the role of facial configuration context in the McGurk effect in 46 healthy students. Adopting an analogue approach using the Autism-Spectrum Quotient (AQ), we sought to determine whether this facial configuration context is crucial to previously observed reductions in the McGurk effect in people with ASD. Lip-reading and audiovisual syllable identification tasks were assessed via presentation of upright normal, inverted normal, upright Thatcher-type, and inverted Thatcher-type faces. When the Thatcher-type face was presented, perceivers were found to be sensitive to the misoriented facial characteristics, causing them to perceive a weaker McGurk effect than when the normal face was presented (this is known as the McThatcher effect). Additionally, the McGurk effect was weaker in individuals with high AQ scores than in those with low AQ scores in the incongruent audiovisual condition, regardless of their ability to read lips or process facial configuration contexts. Our findings, therefore, do not support the assumption that individuals with ASD show a weaker McGurk effect due to a difficulty in processing facial configuration context.

  34. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the sources of information supporting holistic face processing interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Facial and semantic emotional interference: A pilot study on the behavioral and cortical responses to the dual valence association task

    PubMed Central

    2011-01-01

    Background Integration of compatible or incompatible emotional valence and semantic information is an essential aspect of complex social interactions. A modified version of the Implicit Association Test (IAT), called the Dual Valence Association Task (DVAT), was designed to measure conflict resolution processing arising from the compatibility/incompatibility of semantic and facial valence. The DVAT involves two emotional valence evaluative tasks which elicit two forms of compatible/incompatible emotional associations (facial and semantic). Methods Behavioural measures and event-related potentials were recorded while participants performed the DVAT. Results Behavioural data showed a robust effect that distinguished compatible/incompatible tasks. The effects of valence and contextual association (between facial and semantic stimuli) showed early discrimination in the N170 of faces. The LPP component was modulated by the compatibility of the DVAT. Conclusions Results suggest that the DVAT is a robust paradigm for studying the emotional interference effect in the processing of simultaneous information from semantic and facial stimuli. PMID:21489277

  19. Brain response during the M170 time interval is sensitive to socially relevant information.

    PubMed

    Arviv, Oshrit; Goldstein, Abraham; Weeting, Janine C; Becker, Eni S; Lange, Wolf-Gero; Gilboa-Schechtman, Eva

    2015-11-01

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event-related field component of MEG recordings, has, like its EEG counterpart the N170, repeatedly been shown to be associated with the structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRCs connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of nonverbal cue processing may assist in understanding the functioning of the social rank biobehavioral system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Threat processing in generalized social phobia: an investigation of interpretation biases in ambiguous facial affect.

    PubMed

    Jusyte, Aiste; Schönenberg, Michael

    2014-06-30

    Facial affect is one of the most important sources of information during social interactions, but it is susceptible to distortion due to its complex and dynamic nature. Socially anxious individuals have been shown to exhibit alterations in the processing of social information, such as an attentional and interpretative bias toward threatening information. This may be one of the key factors contributing to the development and maintenance of anxious psychopathology. The aim of the current study was to investigate whether a threat-related interpretation bias is evident for ambiguous facial stimuli in individuals with generalized social anxiety disorder (gSAD) as compared to healthy controls. Participants judged ambiguous happy/fearful, angry/fearful, and angry/happy blends varying in intensity and rated the predominant affective expression. The results obtained in this study do not indicate that gSAD is associated with a biased interpretation of ambiguous facial affect. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence.

    PubMed

    Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner

    2018-07-01

    Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of dynamic facial information on the semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant to auditory sentences. Across three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; semantic processing therefore seems to be unaffected by the speaker's gaze and visual speech. However, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate increased attentional processing in richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Aging disrupts the neural transformations that link facial identity across views.

    PubMed

    Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R

    2008-01-01

    Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We presented face images, at various stimulus durations, whose discrimination relies strictly on the shape and geometry of the face. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than in the younger system, and contributes to the age-related deterioration of face discrimination across views.

  3. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different mechanisms, such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication channels and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features regarding the human hand (optical flow and Hu moments) are stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (such as a dialog management unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
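
    As an illustration of the low-level hand features the abstract mentions, the following is a minimal sketch, assuming OpenCV and NumPy, of extracting dense optical flow and Hu moments from a pair of grayscale frames and a binary hand mask; the parameter values and the `hand_mask` input are illustrative assumptions, and this is not the authors' RTDB pipeline.

    ```python
    import cv2
    import numpy as np

    def hand_features(prev_gray, curr_gray, hand_mask):
        """Low-level hand features: dense optical flow plus Hu moments.
        `hand_mask` is a binary (0/255) mask of the segmented hand."""
        # Dense Farneback optical flow between two consecutive grayscale frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mean_flow = flow.reshape(-1, 2).mean(axis=0)  # average (dx, dy) motion

        # Hu moments: seven translation/scale/rotation-invariant shape descriptors
        hu = cv2.HuMoments(cv2.moments(hand_mask, binaryImage=True)).flatten()
        # Log-scale them, since they span many orders of magnitude
        hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        return np.concatenate([mean_flow, hu])  # 9-dimensional feature vector
    ```

    A sequence of such feature vectors could then be fed to a Hidden Markov Model for gesture classification, in the spirit of the pipeline described above.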

  4. Do facial movements express emotions or communicate motives?

    PubMed

    Parkinson, Brian

    2005-01-01

    This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.

  5. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  6. Spatially generalizable representations of facial expressions: Decoding across partial face samples.

    PubMed

    Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W

    2018-04-01

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher (e.g., STS, dPFC) and lower level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention

    PubMed Central

    Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang

    2016-01-01

    Past research has proven humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed, and can influence our behavior, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease of accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention. PMID:27848992

  8. Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention.

    PubMed

    Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang

    2016-11-16

    Past research has proven humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed, and can influence our behavior, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease of accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention.

  9. Seeing Life through Positive-Tinted Glasses: Color–Meaning Associations

    PubMed Central

    Gil, Sandrine; Le Bigot, Ludovic

    2014-01-01

    There is a growing body of literature to show that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue–meaning associations (e.g., red) with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for the face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning. PMID:25098167

  10. Seeing life through positive-tinted glasses: color-meaning associations.

    PubMed

    Gil, Sandrine; Le Bigot, Ludovic

    2014-01-01

    There is a growing body of literature to show that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue-meaning associations (e.g., red) with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for the face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.

  11. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral, and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color combinations as in the first run (congruent trials) or with different face-color combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN), both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. The highest N170 amplitude was found for negative expressions in a negatively associated context, and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity, which was localized to superior parietal cortex and most likely represents a posterior manifestation of the N450, an indicator of conflict processing. Sustained activation of the LPP over parietal cortex for all incongruent trials might reflect enhanced engagement with the facial expression under conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expressions in specific contexts, late components seem to subserve interference resolution during emotional face processing.

  12. The processing of facial identity and expression is interactive, but dependent on task and experience

    PubMed Central

    Yankouskaya, Alla; Humphreys, Glyn W.; Rotshtein, Pia

    2014-01-01

    Facial identity and emotional expression are two important sources of information for daily social interaction. However, the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing, with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the “Garner” paradigm, the composite face effect, and divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of these relations depends on experience, with experience leading to a higher degree of interdependence in the processing of identity and expressions. We propose that this change occurs because integrative processes are more efficient than parallel ones. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field. PMID:25452722

  13. Contrasting Specializations for Facial Motion Within the Macaque Face-Processing System

    PubMed Central

    Fisher, Clark; Freiwald, Winrich A.

    2014-01-01

    Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery’s organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903

  14. Sex differences in social cognition: The case of face processing.

    PubMed

    Proverbio, Alice Mado

    2017-01-02

    Several studies have demonstrated that women show a greater interest for social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.

  15. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.

    PubMed

    Reinl, Maren; Bartels, Andreas

    2014-11-15

    Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal-sequence-sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content, and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in the fusiform face area (FFA), which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the superior temporal sulcus (STS), which was emotion-direction-dependent as it only occurred for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.

  17. Do different fairness contexts and facial emotions motivate 'irrational' social decision-making in major depression? An exploratory patient study.

    PubMed

    Radke, Sina; Schäfer, Ina C; Müller, Bernhard W; de Bruijn, Ellen R A

    2013-12-15

    Although 'irrational' decision-making has been linked to depression, the contribution of biases in information processing to these findings remains unknown. To investigate the impact of cognitive biases and aberrant processing of facial emotions on social decision-making, we manipulated both context-related and emotion-related information in a modified Ultimatum Game. Unfair offers were (1) paired with different unselected alternatives, establishing the context in which an offer was made, and (2) accompanied by emotional facial expressions of proposers. Responder behavior was assessed in patients with major depressive disorder and healthy controls. In both groups alike, rejection rates were highest following unambiguous signals of unfairness, i.e. an angry proposer face or when an unfair distribution had deliberately been chosen over an equal split. However, depressed patients showed overall higher rejection rates than healthy volunteers, without exhibiting differential processing biases. This suggests that depressed patients were, as healthy individuals, basing their decisions on informative, salient features and differentiating between (i) fair and unfair offers, (ii) alternatives to unfair offers and (iii) proposers' facial emotions. Although more fundamental processes, e.g. reduced reward sensitivity, might underlie increased rejection in depression, the current study provides insight into mechanisms that shape fairness considerations in both depressed and healthy individuals. © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Emotional facial expressions reduce neural adaptation to face identity.

    PubMed

    Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R

    2014-05-01

    In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems to be sustained for emotional facial expressions. We therefore tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed, and fearful) of the same or different identity were presented at 3 Hz. We used electroencephalography to record steady-state visual evoked potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same, as compared with different, face identities. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.

  19. Spontaneous Gender Categorization in Masking and Priming Studies: Key for Distinguishing Jane from John Doe but Not Madonna from Sinatra

    PubMed Central

    Habibi, Ruth; Khurana, Beena

    2012-01-01

    Facial recognition is key to social interaction; however, with unfamiliar faces, only generic information, in the form of facial stereotypes such as gender and age, is available. Is generic information therefore more prominent in unfamiliar than in familiar face processing? To address this question, we tapped into two relatively disparate stages of face processing. At the early stages of encoding, we employed perceptual masking to reveal that only the perception of unfamiliar face targets is affected by the gender of the facial masks. At the semantic end, using a priming paradigm, we found that while to-be-ignored unfamiliar faces prime lexical decisions to gender-congruent stereotypic words, familiar faces do not. Our findings indicate that gender is a more salient dimension in unfamiliar relative to familiar face processing, both in early perceptual stages and in later semantic stages of person construal. PMID:22389697

  20. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    PubMed

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  1. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without compromising subject comfort and with minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed reference image for registration is produced through localization of the centroid of the eye region together with image translation and rotation. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is quantified by image quality indices. The results show that the proposed registration process is effective in localizing facial images accurately, which will benefit the correlation analysis of psychological information related to the facial area.
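
    The rigid alignment step described (centroid localization followed by translation and rotation) might look roughly like the following sketch, assuming OpenCV and eye-region centroids that have already been located; the two-stage genetic algorithm used for the actual sequence registration is not reproduced here.

    ```python
    import cv2
    import numpy as np

    def register_frame(frame, eye_l, eye_r, ref_l, ref_r):
        """Rigidly align one thermal frame to a fixed reference image using
        the centroids of the two eye regions. All (x, y) coordinates are
        assumed to have been localized beforehand."""
        angle = lambda a, b: np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0]))
        # Correction angle: current inter-eye axis vs. reference inter-eye axis
        rot = angle(eye_l, eye_r) - angle(ref_l, ref_r)
        center = tuple(np.mean([eye_l, eye_r], axis=0))

        # Rotate about the eye midpoint, then shift it onto the reference midpoint
        M = cv2.getRotationMatrix2D(center, rot, 1.0)
        M[:, 2] += np.mean([ref_l, ref_r], axis=0) - np.asarray(center)

        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))
    ```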

  2. Modulation of Alpha Oscillations in the Human EEG with Facial Preference

    PubMed Central

    Kang, Jae-Hwan; Kim, Su Jin; Cho, Yang Seok; Kim, Sung-Phil

    2015-01-01

    Facial preference that results from the processing of facial information plays an important role in social interactions as well as the selection of a mate, friend, candidate, or favorite actor. However, it still remains elusive which brain regions are implicated in the neural mechanisms underlying facial preference, and how neural activities in these regions are modulated during the formation of facial preference. In the present study, we investigated the modulation of electroencephalography (EEG) oscillatory power with facial preference. For reliable assessments of facial preference, we designed a series of passive viewing and active choice tasks. In the former task, twenty-four face stimuli were passively viewed by participants multiple times in random order. In the latter task, the same stimuli were evaluated by participants for facial preference judgments. In both tasks, significant differences between the preferred and non-preferred face groups were found in alpha band power (8–13 Hz) but not in other frequency bands. The preferred faces generated greater decreases in alpha power. During the passive viewing task, significant differences in alpha power between the preferred and non-preferred face groups were observed at the left frontal regions in the early (0.15–0.4 s) period of the 1-s presentation. By contrast, during the active choice task, in which participants consecutively watched the first and second face for 1 s each and then selected the preferred one, an alpha power difference was found in the late (0.65–0.8 s) period over the whole brain during the first face presentation and over the posterior regions during the second face presentation. These results demonstrate that the modulation of alpha activity by facial preference is a top-down process that requires additional cognitive resources to facilitate processing of the preferred faces, which capture more visual attention than the non-preferred faces. PMID:26394328
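
    For readers who want to reproduce the band-power measure, the following is a minimal sketch, assuming NumPy/SciPy, of estimating alpha-band (8–13 Hz) power for one EEG epoch with Welch's method; the sampling rate and channel layout are assumptions, not the study's recording parameters.

    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_power(epoch, fs=500.0, band=(8.0, 13.0)):
        """Alpha-band power per channel for one EEG epoch.
        `epoch` is an (n_channels, n_samples) array; `fs` is the sampling
        rate in Hz (an assumed value; use the recording's actual rate)."""
        freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[-1], 256), axis=-1)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        # Integrate the power spectral density over the alpha band
        return np.trapz(psd[..., mask], freqs[mask], axis=-1)
    ```

    Under the study's finding, preferred faces would show lower values of this measure (stronger alpha desynchronization) than non-preferred faces.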

  3. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six emotional states: happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information, and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  4. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, managers of large department stores or small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually, which places a heavy burden on small stores. In this paper, the authors propose a method for discriminating gender by extracting differences in facial expression change from color facial images. Many methods already exist in the image processing field for automatic recognition of individuals using moving or still facial images. However, gender discrimination is difficult under the influence of hairstyle, clothing, and similar factors. We therefore propose a method that attends to expression change and is unaffected by individual characteristics such as the size and position of facial parts. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. Finally, the feature values of the input data are compared against a database, and gender is discriminated. Experiments on laughing and smiling expressions yielded good gender discrimination results.
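
    The hue/saturation-based extraction of the facial surface region could be sketched as follows, assuming OpenCV; the threshold values are illustrative assumptions and are not taken from the paper.

    ```python
    import cv2
    import numpy as np

    def extract_face_region(bgr_image):
        """Segment candidate facial (skin) pixels via hue and saturation in
        HSV space, then keep the largest connected component."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # Rough skin-tone band: low hue, moderate saturation (assumed values)
        mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        # Morphological open/close to remove speckles and fill small holes
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # Keep only the largest component as the facial surface region
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n <= 1:
            return mask  # no skin-colored region found
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        return np.where(labels == largest, 255, 0).astype(np.uint8)
    ```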

  5. Three-dimensional visualization system as an aid for facial surgical planning

    NASA Astrophysics Data System (ADS)

    Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles

    2001-05-01

    We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of the human facial aspect after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into object-oriented, portable, and scriptable visualization software that allows the practitioner to manipulate and visualize them interactively. Several LOD (level-of-detail) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft tissue deformations by creating a physically based spring model between the two tissue types. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information, such as participation hints, at the vertex level between a warped generic model and the facial mesh.
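
    The equilibrium computation described, minimizing the energy of a spring system, can be sketched as a simple gradient descent on Hookean spring energy; the mesh topology, stiffness, step size, and iteration count below are illustrative assumptions, not the authors' solver.

    ```python
    import numpy as np

    def relax_springs(pos, springs, rest_len, fixed,
                      stiffness=1.0, step=0.05, iters=2000):
        """Move free vertices to minimize the total Hookean spring energy
        E = sum over springs of 0.5 * k * (|p_i - p_j| - L)^2.
        `pos`: (n, 3) vertex positions; `springs`: (m, 2) vertex index pairs;
        `rest_len`: (m,) rest lengths; `fixed`: boolean mask of vertices
        pinned to the repositioned bone (all illustrative assumptions)."""
        pos = pos.astype(float).copy()
        i, j = springs[:, 0], springs[:, 1]
        for _ in range(iters):
            d = pos[i] - pos[j]
            length = np.linalg.norm(d, axis=1, keepdims=True)
            # Spring force on vertex i: -k * (|d| - L) * unit(d)
            f = -stiffness * (length - rest_len[:, None]) * d / np.maximum(length, 1e-12)
            forces = np.zeros_like(pos)
            np.add.at(forces, i, f)
            np.add.at(forces, j, -f)
            forces[fixed] = 0.0   # bone-attached vertices stay where the surgeon put them
            pos += step * forces  # gradient-descent step toward equilibrium
        return pos
    ```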

  6. Influence of aggression on information processing in the emotional Stroop task: an event-related potential study.

    PubMed

    Bertsch, Katja; Böhnke, Robina; Kruk, Menno R; Naumann, Ewald

    2009-01-01

    Aggression is a common behavior which has frequently been explained as involving changes in higher level information processing patterns. Although researchers have started only recently to investigate information processing in healthy individuals while engaged in aggressive behavior, the impact of aggression on information processing beyond an aggressive encounter remains unclear. In an event-related potential study, we investigated the processing of facial expressions (happy, angry, fearful, and neutral) in an emotional Stroop task after experimentally provoking aggressive behavior in healthy participants. Compared to a non-provoked group, these individuals showed increased early (P2) and late (P3) positive amplitudes for all facial expressions. For the P2 amplitude, the effect of provocation was greatest for threat-related expressions. Beyond this, a bias for emotional expressions, i.e., slower reaction times to all emotional expressions, was found in provoked participants with a high level of trait anger. These results indicate significant effects of aggression on information processing, which last beyond the aggressive encounter even in healthy participants.

  7. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by systematically varying the task and the virtual face models. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either the emotion or the gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. The comparison of the emotion task with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in the processing of high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  8. The role of spatial frequency information in the decoding of facial expressions of pain: a novel hybrid task.

    PubMed

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2017-11-01

    Spatial frequency (SF) information contributes to the recognition of facial expressions, including pain. Low-SF encodes facial configuration and structure and often dominates over high-SF information, which encodes fine details in facial features. This low-SF preference has not been investigated within the context of pain. In this study, we investigated whether perceptual preference differences exist for low-SF and high-SF pain information. A novel hybrid expression paradigm was used in which 2 different expressions, one containing low-SF information and the other high-SF information, were combined in a facial hybrid. Participants were instructed to identify the core expression contained within the hybrid, allowing for the measurement of SF information preference. Three experiments were conducted (46 participants in each) that varied the expressions within the hybrid faces: respectively pain-neutral, pain-fear, and pain-happiness. In order to measure the temporal aspects of image processing, each hybrid image was presented for 33, 67, 150, and 300 ms. As expected, identification of pain and other expressions was dominated by low-SF information across the 3 experiments. The low-SF preference was largest when the presentation of hybrid faces was brief and reduced as the presentation duration increased. A sex difference was also found in experiment 1. For women, the low-SF preference was dampened by high-SF pain information when viewing low-SF neutral expressions. These results not only confirm the role that SF information has in the recognition of pain in facial expressions but suggest that in some situations there may be sex differences in how pain is communicated.
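
    The hybrid construction itself is a standard image-processing step: take the low spatial frequencies of one expression and the high spatial frequencies of another. The following is a minimal sketch assuming OpenCV; the Gaussian cutoff `sigma` is an assumed value, not the filter setting used in the study.

    ```python
    import cv2
    import numpy as np

    def make_hybrid(face_low, face_high, sigma=8.0):
        """Combine the low-SF content of `face_low` with the high-SF content
        of `face_high` (equal-size grayscale images, 0-255 range)."""
        lo = face_low.astype(np.float32)
        hi = face_high.astype(np.float32)
        low_pass = cv2.GaussianBlur(lo, (0, 0), sigma)        # coarse configuration
        high_pass = hi - cv2.GaussianBlur(hi, (0, 0), sigma)  # fine featural detail
        return np.clip(low_pass + high_pass, 0, 255).astype(np.uint8)
    ```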

  9. Dissociation of Neural Substrates of Response Inhibition to Negative Information between Implicit and Explicit Facial Go/Nogo Tasks: Evidence from an Electrophysiological Study

    PubMed Central

    Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai

    2014-01-01

    Background Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, the neural substrates of response inhibition to negative facial information remain unclear. We therefore used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing on response inhibition. Methods We used implicit (gender categorization) and explicit (emotion categorization) emotional Go/Nogo tasks in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks. Results We detected a task by emotion by trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes during sad conditions versus neutral conditions were detected with explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses of the P3 component revealed that the right inferior frontal junction (rIFJ) was involved during this stage. The current source density (CSD) of the rIFJ was higher for sad conditions than for neutral conditions in explicit tasks, but not in implicit tasks. Conclusions The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212

  10. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    PubMed

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  11. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  12. Dynamic Facial Expressions Prime the Processing of Emotional Prosody.

    PubMed

    Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Kotz, Sonja A

    2018-01-01

    Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in the case of emotional prime-target incongruency.

  13. Monkeys preferentially process body information while viewing affective displays.

    PubMed

    Bliss-Moreau, Eliza; Moadab, Gilda; Machado, Christopher J

    2017-08-01

    Despite evolutionary claims about the function of facial behaviors across phylogeny, rarely are those hypotheses tested in a comparative context, that is, by evaluating how nonhuman animals process such behaviors. Further, while increasing evidence indicates that humans make meaning of faces by integrating contextual information, including that from the body, the extent to which nonhuman animals process contextual information during affective displays is unknown. In the present study, we evaluated the extent to which rhesus macaques (Macaca mulatta) process dynamic affective displays of conspecifics that included both facial and body behaviors. Contrary to hypotheses that they would preferentially attend to faces during affective displays, monkeys looked for longest, most frequently, and first at conspecifics' bodies rather than their heads. These findings indicate that macaques, like humans, attend to available contextual information during the processing of affective displays, and that the body may also be providing unique information about affective states. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Long-term academic stress enhances early processing of facial expressions.

    PubMed

    Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui

    2016-11-01

    Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotional facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization of negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information input and a subsequent attentional bias toward processing emotionally negative stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Effective connectivity during processing of facial affect: evidence for multiple parallel pathways.

    PubMed

    Dima, Danai; Stephan, Klaas E; Roiser, Jonathan P; Friston, Karl J; Frangou, Sophia

    2011-10-05

    The perception of facial affect engages a distributed cortical network. We used functional magnetic resonance imaging and dynamic causal modeling to characterize effective connectivity during explicit (conscious) categorization of affective stimuli in the human brain. Specifically, we examined the modulation of connectivity from posterior regions of the face-processing network to the lateral ventral prefrontal cortex (VPFC) during affective categorization and we tested for a potential role of the amygdala (AMG) in mediating this modulation. We found that explicit processing of facial affect led to prominent modulation (increase) in the effective connectivity from the inferior occipital gyrus (IOG) to the VPFC, while there was less evidence for modulation of the afferent connections from fusiform gyrus and AMG to VPFC. More specifically, the forward connection from IOG to the VPFC exhibited a selective increase under anger (as opposed to fear or sadness). Furthermore, Bayesian model comparison suggested that the modulation of afferent connections to the VPFC was mediated directly by facial affect, as opposed to an indirect modulation mediated by the AMG. Our results thus suggest that affective information is conveyed to the VPFC along multiple parallel pathways and that AMG activity is not sufficient to account for the gating of information transfer to the VPFC during explicit emotional processing.
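
    As a side note on the method named above, Bayesian model comparison selects between competing connectivity models by their (approximated) log-evidence. The sketch below is a minimal, generic Python illustration with hypothetical numbers, not the authors' analysis; in DCM the log-evidence of each model is typically approximated by its negative variational free energy.

        # Minimal sketch of Bayesian model comparison (hypothetical values).
        # Model 1: facial affect modulates the IOG -> VPFC connection directly.
        # Model 2: the modulation is mediated indirectly via the amygdala.
        log_evidence = {"direct": -11203.4, "amygdala_mediated": -11209.1}

        log_bayes_factor = log_evidence["direct"] - log_evidence["amygdala_mediated"]
        print(f"log Bayes factor (direct vs. mediated): {log_bayes_factor:.1f}")
        # By a common convention, a log Bayes factor above ~3 is taken as
        # strong evidence in favour of the first model.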

  16. Spontaneous Facial Mimicry Is Enhanced by the Goal of Inferring Emotional States: Evidence for Moderation of "Automatic" Mimicry by Higher Cognitive Processes.

    PubMed

    Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya

    2016-01-01

    A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like "automatic" response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target's facial expressions depends on whether participants are motivated to infer the target's emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target's emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target's emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target's emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes.

  17. Intact anger recognition in depression despite aberrant visual facial information usage.

    PubMed

    Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M

    2014-08-01

    Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as by the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions. PMID:22438875

  19. Perceiving emotions in neutral faces: expression processing is biased by affective person knowledge.

    PubMed

    Suess, Franziska; Rabovsky, Milena; Abdel Rahman, Rasha

    2015-04-01

    According to a widely held view, basic emotions such as happiness or anger are reflected in facial expressions that are invariant and uniquely defined by specific facial muscle movements. Accordingly, expression perception should not be vulnerable to influences outside the face. Here, we test this assumption by manipulating the emotional valence of biographical knowledge associated with individual persons. Faces of well-known and initially unfamiliar persons displaying neutral expressions were associated with socially relevant negative, positive or comparatively neutral biographical information. The expressions of faces associated with negative information were classified as more negative than faces associated with neutral information. Event-related brain potential modulations in the early posterior negativity, a component taken to reflect early sensory processing of affective stimuli such as emotional facial expressions, suggest that negative affective knowledge can bias the perception of faces with neutral expressions toward subjectively displaying negative emotions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  1. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflective dots on the subject's face and video-recording the subject with three infrared-light cameras while he or she performs several facial movements, such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study was performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities were evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that the system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for the evaluation of facial movements is demonstrated, as is its high intrarater and interrater reliability. It has advantages over other systems developed for the evaluation of facial movements, such as short calibration time, short measuring time, ease of use, and the fact that it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.
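
    To illustrate the kind of post-processing such an optical system performs, the sketch below derives per-frame speed and peak excursion from a single tracked marker trajectory. This is a generic Python/numpy illustration under stated assumptions (an (n_frames, 3) array of positions in mm and a known frame rate), not FACIAL CLIMA's actual software.

        import numpy as np

        def marker_kinematics(traj_mm, fps):
            """Per-frame speed (mm/s) and peak excursion (mm) of one marker."""
            step = np.diff(traj_mm, axis=0)              # frame-to-frame displacement
            speed = np.linalg.norm(step, axis=1) * fps   # mm per second
            excursion = np.linalg.norm(traj_mm - traj_mm[0], axis=1)
            return speed, excursion.max()

        # Synthetic 2-second, 60-fps "smile" trajectory of one mouth-corner marker.
        t = np.linspace(0, 2, 120)
        traj = np.stack([10 * np.sin(np.pi * t / 4),
                         np.zeros_like(t), np.zeros_like(t)], axis=1)
        speed, peak = marker_kinematics(traj, fps=60)
        print(f"peak speed {speed.max():.1f} mm/s, peak excursion {peak:.1f} mm")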

  2. The association between PTSD and facial affect recognition.

    PubMed

    Williams, Christian L; Milanak, Melissa E; Judah, Matt R; Berenbaum, Howard

    2018-05-05

    The major aims of this study were to examine whether, and how, higher levels of PTSD are associated with performance on a facial affect recognition task in which facial expressions of emotion are superimposed on emotionally valenced, non-face images. College students with trauma histories (N = 90) completed a facial affect recognition task as well as measures of exposure to traumatic events and PTSD symptoms. When the face and context matched, participants with higher levels of PTSD were significantly more accurate. When the face and context were mismatched, participants with lower levels of PTSD were more accurate than those with higher levels of PTSD. These findings suggest that PTSD is associated with how people process affective information. Furthermore, these results suggest that the enhanced attention of people with higher levels of PTSD to affective information can be either beneficial or detrimental to their ability to accurately identify facial expressions of emotion. Limitations, future directions and clinical implications are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Attachment anxiety moderates the relationship between childhood maltreatment and attention bias for emotion in adults

    PubMed Central

    Davis, Jennifer S; Fani, Negar; Ressler, Kerry; Jovanovic, Tanja; Tone, Erin B.; Bradley, Bekh

    2014-01-01

    Research indicates that some individuals who were maltreated in childhood demonstrate biases in social information processing. However, the mechanisms through which these biases develop remain unclear—one possible mechanism is via attachment-related processes. Childhood maltreatment increases risk for insecure attachment. The internal working models of self and others associated with insecure attachment may impact the processing of socially relevant information, particularly emotion conveyed in facial expressions. We investigated associations among child abuse, attachment anxiety and avoidance, and attention biases for emotion in an adult population. Specifically, we examined how self-reported attachment influences the relationship between childhood abuse and attention bias for emotion. A dot probe task consisting of happy, threatening, and neutral female facial stimuli was used to assess possible biases in attention for socially relevant stimuli. Our findings indicate that attachment anxiety moderated the relationship between maltreatment and attention bias for happy emotion; among individuals with a child abuse history, attachment anxiety significantly predicted an attention bias away from happy facial stimuli. PMID:24680873
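
    For readers unfamiliar with the dot probe task mentioned above, attention bias is conventionally scored as the reaction-time difference between probes replacing the neutral versus the emotional face. A minimal Python sketch with hypothetical reaction times follows; a negative score, as reported here for individuals with a child abuse history and high attachment anxiety, indexes attention away from happy faces.

        import numpy as np

        def attention_bias(rt_incongruent, rt_congruent):
            """Bias = mean RT when the probe replaces the neutral face minus
            mean RT when it replaces the emotional face (positive = vigilance)."""
            return np.mean(rt_incongruent) - np.mean(rt_congruent)

        rt_congruent = [512, 498, 530, 505]    # probe at happy-face location (ms)
        rt_incongruent = [488, 495, 479, 501]  # probe at neutral-face location (ms)
        print(f"bias score: {attention_bias(rt_incongruent, rt_congruent):.1f} ms")
        # Prints -20.5 ms: attention directed away from the happy face.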

  4. Brief report: Representational momentum for dynamic facial expressions in pervasive developmental disorder.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-03-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.

  5. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  6. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attentional process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36±6.74) and 15 normal controls (mean age = 40.87±9.33) participated in this study. A visual search task, comprising feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion set were administered. Patients with schizophrenia showed worse visual search performance in both feature search and conjunction search than normal controls, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.

  7. Influence of Aggression on Information Processing in the Emotional Stroop Task – an Event-Related Potential Study

    PubMed Central

    Bertsch, Katja; Böhnke, Robina; Kruk, Menno R.; Naumann, Ewald

    2009-01-01

    Aggression is a common behavior which has frequently been explained as involving changes in higher level information processing patterns. Although researchers have started only recently to investigate information processing in healthy individuals while engaged in aggressive behavior, the impact of aggression on information processing beyond an aggressive encounter remains unclear. In an event-related potential study, we investigated the processing of facial expressions (happy, angry, fearful, and neutral) in an emotional Stroop task after experimentally provoking aggressive behavior in healthy participants. Compared to a non-provoked group, these individuals showed increased early (P2) and late (P3) positive amplitudes for all facial expressions. For the P2 amplitude, the effect of provocation was greatest for threat-related expressions. Beyond this, a bias for emotional expressions, i.e., slower reaction times to all emotional expressions, was found in provoked participants with a high level of trait anger. These results indicate significant effects of aggression on information processing, which last beyond the aggressive encounter even in healthy participants. PMID:19826616

  8. Do Valenced Odors and Trait Body Odor Disgust Affect Evaluation of Emotion in Dynamic Faces?

    PubMed

    Syrjänen, Elmeri; Liuzza, Marco Tullio; Fischer, Håkan; Olofsson, Jonas K

    2017-12-01

    Disgust is a core emotion that evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these facial expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect the recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster reaction times for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect, and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions are present versus absent.

  9. Major depression is associated with impaired processing of emotion in music as well as in facial and vocal stimuli.

    PubMed

    Naranjo, C; Kornreich, C; Campanella, S; Noël, X; Vandriette, Y; Gillain, B; de Longueville, X; Delatte, B; Verbanck, P; Constant, E

    2011-02-01

    The processing of emotional stimuli is thought to be negatively biased in major depression. This study investigates this issue using musical, vocal and facial affective stimuli. 23 depressed in-patients and 23 matched healthy controls were recruited. Affective information processing was assessed through musical, vocal and facial emotion recognition tasks. Depression, anxiety level and attention capacity were controlled. The depressed participants identified emotions less accurately than the control group in all three types of emotion recognition task. The depressed group also gave higher intensity ratings than the controls when scoring negative emotions, and they were more likely to attribute negative emotions to neutral voices and faces. Our in-patient group might differ from the more general population of depressed adults; they were all taking antidepressant medication, which may have influenced their emotional information processing. Major depression is associated with a general negative bias in the processing of emotional stimuli. Emotional processing impairment in depression is not confined to interpersonal stimuli (faces and voices); it is also present in the ability to accurately identify emotion in music. © 2010 Elsevier B.V. All rights reserved.

  10. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
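
    The Bubbles method referenced above reveals a face through randomly placed Gaussian apertures and relates trial-by-trial accuracy to the regions sampled on each trial. The sketch below is a simplified, single-scale Python illustration of the mask generation only; the full technique also samples separate spatial-frequency bands, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(0)

        def bubbles_mask(shape, n_bubbles=10, sigma=12.0):
            """Mask in [0, 1] built from randomly centred Gaussian apertures."""
            h, w = shape
            yy, xx = np.mgrid[0:h, 0:w]
            mask = np.zeros(shape)
            for cy, cx in zip(rng.integers(0, h, n_bubbles),
                              rng.integers(0, w, n_bubbles)):
                bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
                mask = np.maximum(mask, bubble)
            return mask

        face = rng.random((256, 256))              # stand-in for a face image
        stimulus = face * bubbles_mask(face.shape) # face seen through the bubbles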

  11. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers.

    PubMed

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  12. Priming the Secure Attachment Schema Affects the Emotional Face Processing Bias in Attachment Anxiety: An fMRI Research

    PubMed Central

    Tang, Qingting; Chen, Xu; Hu, Jia; Liu, Ying

    2017-01-01

    Our study explored how priming with a secure base schema affects the processing of emotional facial stimuli in individuals with attachment anxiety. We enrolled 42 undergraduate students between 18 and 27 years of age and divided them into two groups: attachment anxiety and attachment secure. All participants were primed under two conditions: secure priming using references to the partner, and neutral priming using neutral references. We performed repeated attachment security priming combined with a dual-task paradigm and functional magnetic resonance imaging. Participants’ reaction times to the facial stimuli were also measured. Attachment security priming can facilitate an individual’s processing of positive emotional faces; for instance, the presentation of the partner’s name was associated with stronger activity in a wide range of brain regions and faster reaction times for positive facial expressions. The current finding of higher activity in left-hemisphere regions for secure priming rather than neutral priming is consistent with the prediction that attachment security priming triggers the spread of activation of a positive emotional state. However, the difference in brain activity between the two priming conditions during the processing of both positive and negative emotional facial stimuli appeared in the attachment anxiety group alone. This study indicates that the effect of secure attachment priming on the processing of emotional facial stimuli may be mediated by chronic attachment anxiety. In addition, it highlights the association between higher-order processes of the attachment system (secure attachment schema priming) and the early-stage information processing system (attention), given the increased attention toward the effects of the secure base schema on the processing of emotion- and attachment-related information among the insecure population. The present study thus has applications in providing directions for the clinical treatment of mood disorders associated with attachment anxiety. PMID:28473796

  13. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    PubMed

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

    Smooth social communication consists of both verbal and non-verbal information. However, it remains unclear how observers judge the trustworthiness of a person who presents incongruent verbal and non-verbal information, and which brain activities accompany such judgments. In the present study, we attempted to identify the impact of incongruence between verbal information and facial expression on judged trustworthiness and on brain activity, using event-related potentials (ERP). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The trustworthiness of the presented person was operationalized as the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge trustworthiness was recorded for each trial. Using electroencephalography, ERPs were obtained by averaging the wave patterns recorded while the participants judged trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe did not differ significantly between conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes was higher for the incongruent condition [negative × smile] than for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed over the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.

  14. Retention interval affects visual short-term memory encoding.

    PubMed

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store finely detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that the retention interval affects the neural processes of short-term memory encoding, using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI condition as compared with the 6-s condition. The present findings cannot be explained by differences in sensory processing demands or overall task difficulty, because stimulus information and subjects' performance did not differ between the two ISI conditions. These results reveal that the encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  15. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid facial motion may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Enhanced Facial Symmetry Assessment in Orthodontists

    PubMed Central

    Jackson, Tate H.; Clark, Kait; Mitroff, Stephen R.

    2013-01-01

    Assessing facial symmetry is an evolutionarily important process, which suggests that individual differences in this ability should exist. As existing data are inconclusive, the current study explored whether a group trained in facial symmetry assessment, orthodontists, possessed enhanced abilities. Symmetry assessment was measured using face and non-face stimuli among orthodontic residents and two control groups: university participants with no symmetry training and airport security luggage screeners, a group previously shown to possess expert visual search skills unrelated to facial symmetry. Orthodontic residents were more accurate at assessing symmetry in both upright and inverted faces compared to both control groups, but not for non-face stimuli. These differences are not likely due to motivational biases or a speed-accuracy tradeoff—orthodontic residents were slower than the university participants but not the security screeners. Understanding such individual differences in facial symmetry assessment may inform the perception of facial attractiveness. PMID:24319342

  17. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding, and thus to neurocognitive dysfunctions such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event-Related Spectral Perturbation (ERSP) and the Inter-Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were significantly weaker in patients than in healthy controls, in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate less effective functioning in the recognition process of facial features, which may contribute to less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
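
    For reference, the two spectral measures used above can be computed directly from a complex time-frequency decomposition of the single-trial EEG. The following is a minimal numpy sketch under stated assumptions (a complex array of shape (n_trials, n_freqs, n_times), e.g., from Morlet wavelets), not the authors' pipeline.

        import numpy as np

        def ersp_and_itc(tf, baseline):
            """ERSP as dB change from baseline power; ITC as the length of the
            mean phase vector across trials. Both return (n_freqs, n_times)."""
            power = np.abs(tf) ** 2
            base = power[..., baseline].mean(axis=(0, 2), keepdims=True)
            ersp = 10 * np.log10(power.mean(axis=0, keepdims=True) / base)[0]
            itc = np.abs(np.mean(tf / np.abs(tf), axis=0))  # 0 = random, 1 = locked
            return ersp, itc

        # Example: 40 trials, 5 theta-band frequencies, 300 time samples.
        rng = np.random.default_rng(1)
        tf = rng.standard_normal((40, 5, 300)) + 1j * rng.standard_normal((40, 5, 300))
        ersp, itc = ersp_and_itc(tf, baseline=slice(0, 50))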

  18. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat.

    PubMed

    Schutter, Dennis J L G; de Haan, Edward H F; van Honk, Jack

    2004-06-01

    The angry facial expression is an important socially threatening stimulus argued to have evolved to regulate social hierarchies. In the present study, event-related potentials (ERP) were used to investigate the involvement and temporal dynamics of the frontal and parietal regions in the processing of angry facial expressions. Angry, happy and neutral faces were shown to eighteen healthy right-handed volunteers in a passive viewing task. Stimulus-locked ERPs were recorded from the frontal and parietal scalp sites. The P200, N300 and early contingent negative variation (eCNV) components of the electric brain potentials were investigated. Analyses revealed statistically significant reductions in P200 amplitude for the angry facial expression at both frontal and parietal electrode sites. Furthermore, apart from being strongly associated with the anterior P200, the N300 was also more negative for the angry facial expression in the anterior regions. Finally, the eCNV was more pronounced over the parietal sites for the angry facial expressions. The present study demonstrated specific electrocortical correlates underlying the processing of angry facial expressions in the anterior and posterior brain sectors. The P200 is argued to indicate valence tagging by a fast and early detection mechanism. The lowered N300 with an anterior distribution for the angry facial expressions indicates more elaborate evaluation of stimulus relevance. The fact that the P200 and the N300 are highly correlated suggests that they reflect different stages of the same anterior evaluation mechanism. The more pronounced posterior eCNV suggests sustained attention to socially threatening information. Copyright 2004 Elsevier B.V.

  1. Facing the Problem: Impaired Emotion Recognition During Multimodal Social Information Processing in Borderline Personality Disorder.

    PubMed

    Niedtfeld, Inga; Defiebre, Nadine; Regenbogen, Christina; Mier, Daniela; Fenske, Sabrina; Kirsch, Peter; Lis, Stefanie; Schmahl, Christian

    2017-04-01

    Previous research has revealed alterations and deficits in facial emotion recognition in patients with borderline personality disorder (BPD). During interpersonal communication in daily life, social signals such as speech content, variation in prosody, and facial expression need to be considered simultaneously. We hypothesized that deficits in the higher-level integration of social stimuli contribute to difficulties in emotion recognition in BPD, and that heightened arousal might explain this effect. Thirty-one patients with BPD and thirty-one healthy controls were asked to identify emotions in short video clips designed to represent different combinations of three communication channels: facial expression, speech content, and prosody. Skin conductance was recorded as a measure of sympathetic arousal, while controlling for state dissociation. Patients with BPD showed lower mean accuracy scores than healthy control subjects in all conditions comprising emotional facial expressions: both in the condition with facial expression only and in the combination of all three communication channels. Electrodermal responses were enhanced in BPD only in response to auditory stimuli. In line with the majority of facial emotion recognition studies, we conclude that deficits in the interpretation of facial expressions lead to the difficulties observed in multimodal emotion processing in BPD.

  2. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    PubMed

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
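
    The superadditivity test attributed to Miller (1982) above compares the redundant-target RT distribution against the bound implied by two independently racing channels: P(RT ≤ t | redundant) ≤ P(RT ≤ t | A) + P(RT ≤ t | B). A minimal Python sketch with hypothetical RT samples follows; any positive excess over the bound indicates coactivation rather than an independent race.

        import numpy as np

        def ecdf(sample, t):
            sample = np.asarray(sample)
            return (sample[:, None] <= t).mean(axis=0)   # empirical CDF at times t

        def race_model_excess(rt_redundant, rt_a, rt_b, n_points=50):
            lo = min(map(min, (rt_redundant, rt_a, rt_b)))
            hi = max(map(max, (rt_redundant, rt_a, rt_b)))
            t = np.linspace(lo, hi, n_points)
            bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_b, t))
            return t, ecdf(rt_redundant, t) - bound      # > 0 violates the race model

        rng = np.random.default_rng(2)
        rt_a = rng.normal(560, 60, 200)     # identity-alone RTs (ms), hypothetical
        rt_b = rng.normal(570, 60, 200)     # emotion-alone RTs (ms), hypothetical
        rt_red = rng.normal(490, 55, 200)   # redundant-target RTs (ms), hypothetical
        t, excess = race_model_excess(rt_red, rt_a, rt_b)
        print("race model violated:", bool((excess > 0).any()))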

  3. Multimodal processing of emotional information in 9-month-old infants I: emotional faces and voices.

    PubMed

    Otte, R A; Donkers, F C L; Braeken, M A K A; Van den Bergh, B R H

    2015-04-01

    Making sense of emotions manifesting in human voice is an important social skill which is influenced by emotions in other modalities, such as that of the corresponding face. Although processing emotional information from voices and faces simultaneously has been studied in adults, little is known about the neural mechanisms underlying the development of this ability in infancy. Here we investigated multimodal processing of fearful and happy face/voice pairs using event-related potential (ERP) measures in a group of 84 9-month-olds. Infants were presented with emotional vocalisations (fearful/happy) preceded by the same or a different facial expression (fearful/happy). The ERP data revealed that the processing of emotional information appearing in human voice was modulated by the emotional expression appearing on the corresponding face: Infants responded with larger auditory ERPs after fearful compared to happy facial primes. This finding suggests that infants dedicate more processing capacities to potentially threatening than to non-threatening stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires matching faces bearing specific expressions to faces from a neutral-face database. We propose a method for facial recognition under varied expressions against neutral-face samples of individuals, based on expression recognition, expression warping, and a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is organized by average facial-expression shape and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
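
    The overall pipeline described above (classify the expression, normalize the face toward neutral, then match against a neutral gallery) can be summarized in a runnable toy sketch. Every function here is an illustrative stand-in, not the authors' code: the "normalization" is a trivial pixel z-score in place of the paper's shape-based expression warping and wrinkle masking.

        import numpy as np

        rng = np.random.default_rng(3)

        def classify_expression(face):
            # Stand-in for the expression classifier that routes the input
            # face to a facial expression group.
            return "happy"

        def normalize_toward_neutral(face, expression_label):
            # Stand-in for warping to the group's average neutral shape plus
            # wrinkle masking; here: simple pixel standardization.
            return (face - face.mean()) / (face.std() + 1e-9)

        def match(probe, neutral_gallery):
            normalized = normalize_toward_neutral(probe, classify_expression(probe))
            dists = {pid: np.linalg.norm(normalized - normalize_toward_neutral(g, "neutral"))
                     for pid, g in neutral_gallery.items()}
            return min(dists, key=dists.get)   # nearest neighbour by Euclidean distance

        gallery = {f"id{i}": rng.random((64, 64)) for i in range(5)}
        probe = gallery["id2"] + 0.05 * rng.random((64, 64))  # same person, perturbed
        print(match(probe, gallery))           # expected: "id2"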

  5. Differential roles of low and high spatial frequency content in abnormal facial emotion perception in schizophrenia.

    PubMed

    McBain, Ryan; Norton, Daniel; Chen, Yue

    2010-09-01

    While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
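
    The high-pass/low-pass manipulation used in such studies can be approximated very simply: a Gaussian blur retains the low spatial frequencies, and subtracting the blurred image from the original retains the high spatial frequencies. A minimal Python sketch follows; the cutoff (sigma, in pixels) is an arbitrary illustrative choice, whereas published studies specify cutoffs in cycles per face.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def split_spatial_frequencies(image, sigma=8.0):
            """Return (low-pass, high-pass) versions of a grayscale image."""
            low = gaussian_filter(image.astype(float), sigma)  # keeps coarse structure
            high = image.astype(float) - low                   # keeps fine detail
            return low, high

        face = np.random.default_rng(4).random((128, 128))     # stand-in face image
        low_sf, high_sf = split_spatial_frequencies(face)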

  6. Priming Facial Gender and Emotional Valence: The Influence of Spatial Frequency on Face Perception in ASD.

    PubMed

    Vanmarcke, Steven; Wagemans, Johan

    2017-04-01

    Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (valence task). Adolescents with ASD made more categorization errors than typically developing adolescents. They also showed an age-dependent improvement in categorization speed and had more difficulty categorizing facial expressions than gender. However, in neither categorization task did we find group differences in the processing of coarse versus fine prime information. This contradicted our expectations and indicated that the perceptual differences between adolescents with and without ASD critically depended on the processing time available for the primes.

  7. Global-Local Precedence in the Perception of Facial Age and Emotional Expression by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2005-01-01

    Global information processing and perception of facial age and emotional expression was studied in children with autism, language disorders, mental retardation, and a clinical control group. Children were given a global-local task and asked to recognize age and emotion in human and canine faces. Children with autism made fewer global responses and…

  8. Implicit and explicit processing of emotional facial expressions in Parkinson's disease.

    PubMed

    Wagenbreth, Caroline; Wattenberg, Lena; Heinze, Hans-Jochen; Zaehle, Tino

    2016-04-15

    Besides motor problems, Parkinson's disease (PD) is associated with impairments in emotional and cognitive functioning. Deficient explicit emotional processing has been observed, whilst patients also show impaired Theory of Mind (ToM) abilities. However, it is unclear whether this ToM deficit in PD patients is based on an inability to infer others' emotional states or whether it is due to explicit emotional processing deficits. We investigated implicit and explicit emotional processing in PD with an affective priming paradigm in which we used pictures of human eyes as emotional primes and a lexical decision task (LDT) with emotionally connoted words as target stimuli. First, sixteen PD patients and sixteen matched healthy controls performed an LDT combined with an emotional priming paradigm providing emotional information through the facial eye region, to assess implicit emotional processing. Second, participants explicitly evaluated the emotional status of the eyes and words used in the implicit task. Compared to controls, implicit emotional processing abilities were generally preserved in PD, with, however, considerable alterations in the processing of happiness and disgust. Furthermore, we observed a general impairment in patients' explicit evaluation of emotional stimuli, which was augmented for the rating of facial expressions. This is the first study reporting results for affective priming with facial eye expressions in PD patients. Our findings indicate largely preserved implicit emotional processing, with specifically altered processing of disgust and happiness. Explicit emotional processing was considerably impaired for semantic and especially for facial stimulus material. Poor ToM abilities in PD patients might be based on deficient explicit emotional processing, with a preserved ability to implicitly infer other people's feelings. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    PubMed Central

    Saneiro, Mar; Salmeron-Majadas, Sergio

    2014-01-01

    We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources, such as qualitative, self-reported, physiological, and behavioral information. Together, these data are to be used to train data mining algorithms that automatically identify changes in the learners' affective states during cognitive tasks, which in turn helps to provide personalized emotional support. PMID:24892055

  10. Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches.

    PubMed

    Saneiro, Mar; Santos, Olga C; Salmeron-Majadas, Sergio; Boticario, Jesus G

    2014-01-01

    We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources, such as qualitative, self-reported, physiological, and behavioral information. Together, these data are to be used to train data mining algorithms that automatically identify changes in the learners' affective states during cognitive tasks, which in turn helps to provide personalized emotional support.

  11. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.

  12. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion

    PubMed Central

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R.

    2015-01-01

    Scherer’s Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes. PMID:26295338

  13. Interference among the Processing of Facial Emotion, Face Race, and Face Gender.

    PubMed

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously, and models of face processing differ in how variant facial properties (such as emotion) and invariant facial properties (such as race and gender) are handled. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provide evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).

  14. Interference among the Processing of Facial Emotion, Face Race, and Face Gender

    PubMed Central

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously, and models of face processing differ in how variant facial properties (such as emotion) and invariant facial properties (such as race and gender) are handled. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provide evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621

  15. (Social) Cognitive Skills and Social Information Processing in Children with Mild to Borderline Intellectual Disabilities

    ERIC Educational Resources Information Center

    van Nieuwenhuijzen, M.; Vriens, A.

    2012-01-01

    The purpose of this study was to examine the unique contributions of (social) cognitive skills such as inhibition, working memory, perspective taking, facial emotion recognition, and interpretation of situations to the variance in social information processing in children with mild to borderline intellectual disabilities. Respondents were 79…

  16. Distinct facial processing in schizophrenia and schizoaffective disorders

    PubMed Central

    Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost

    2011-01-01

    Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performance of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group on two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199

  17. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single-neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four identities, each with four different expressions), human faces (three identities, each with four different expressions), and geometric shapes. Modifications were made to each face picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright versus inverted presentation, and a smaller number of neurons (13%) showed activity modulations depending on original versus thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors.

  18. Facial affect processing in patients receiving opioid treatment in palliative care: preferential processing of threat in pain catastrophizers.

    PubMed

    Carroll, Erin M A; Kamboj, Sunjeev K; Conroy, Laura; Tookman, Adrian; Williams, Amanda C de C; Jones, Louise; Morgan, Celia J A; Curran, H Valerie

    2011-06-01

    As a multidimensional phenomenon, pain is influenced by various psychological factors. One such factor is catastrophizing, which is associated with higher pain intensity and emotional distress in cancer and noncancer pain. One possibility is that catastrophizing represents a general cognitive style that preferentially supports the processing of negative affective stimuli. Such preferential processing of threat (toward negative facial expressions, for example) is seen in emotional disorders and is sensitive to pharmacological treatment. Whether pharmacological (analgesic) treatment might also influence the processing of threat in pain patients is currently unclear. This study investigates the effects of catastrophizing on the processing of facial affect in those receiving an acute opioid dose. In a double-blind crossover design, the performance of 20 palliative care patients after their usual dose of immediate-release opioid was compared with their performance following matched-placebo administration on a facial affect recognition task (i.e., speed and accuracy) and a threat-pain estimation task (i.e., ratings of pain intensity). The influence of catastrophizing was examined by splitting the sample according to their scores on the Pain Catastrophizing Scale (PCS). Opioid administration had no effect on facial affect processing compared with placebo. However, the main finding was that enhanced processing of fear, sadness, and disgust was found only in patients who scored highly on the PCS. There was no difference in performance between the two PCS groups on the other emotions (i.e., happiness, surprise, and anger). These findings suggest that catastrophizing is associated with an affective information-processing bias in patients with severe pain conditions. Copyright © 2011 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  19. Emotion recognition in borderline personality disorder: effects of emotional information on negative bias.

    PubMed

    Fenske, Sabrina; Lis, Stefanie; Liebke, Lisa; Niedtfeld, Inga; Kirsch, Peter; Mier, Daniela

    2015-01-01

    Borderline Personality Disorder (BPD) is characterized by severe deficits in social interactions, which might be linked to deficits in emotion recognition. Research on emotion recognition abilities in BPD has revealed heterogeneous results, ranging from deficits to heightened sensitivity. The most stable findings point to an impairment in the evaluation of neutral facial expressions as neutral, as well as to a negative bias in emotion recognition; that is, the tendency to attribute negative emotions to neutral expressions or, in a broader sense, to report a more negative emotion category than the one depicted. However, it remains unclear which contextual factors influence the occurrence of this negative bias. Previous studies suggest that priming by preceding emotional information and constrained processing time might augment the emotion recognition deficit in BPD. To test these assumptions, 32 female BPD patients and 31 healthy females, matched for age and education, participated in an emotion recognition study in which every facial expression was preceded by either a positive, neutral or negative scene. Furthermore, time constraints for processing were varied by presenting the facial expressions with short (100 ms) or long duration (up to 3000 ms) in two separate blocks. BPD patients showed a significant deficit in emotion recognition for neutral and positive facial expressions, associated with a significant negative bias. In BPD patients, this emotion recognition deficit was differentially affected by preceding emotional information and time constraints, with a greater influence of emotional information during long face presentations and a greater influence of neutral information during short face presentations. Our results are in line with previous findings supporting the existence of a negative bias in emotion recognition in BPD patients, and provide further insights into biased social perceptions in BPD patients.

  20. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  1. Associations among facial masculinity, physical strength, fluctuating asymmetry and attractiveness in young men and women.

    PubMed

    Van Dongen, Stefan

    2014-01-01

    Studies of the process of human mate selection and attractiveness have assumed that selection favours morphological features that correlate with (genetic) quality. Degree of masculinity/femininity and fluctuating asymmetry (FA) may signal (genetic) quality, but what information they harbour and how they relate to fitness is still debated. This study examined the strength of associations between facial masculinity/femininity, facial FA, attractiveness and physical strength in humans. Two hundred young males and females were studied by measuring facial asymmetry and masculinity on the basis of frontal photographs. Attractiveness was determined on the basis of scores given by an anonymous panel, and physical strength using hand grip strength. Patterns differed markedly between males and females and with the analysis method used (univariate vs multivariate). Overall, no associations between FA and attractiveness, masculinity or physical strength were found. In females, but not males, masculinity and attractiveness correlated negatively, and masculinity and physical strength correlated positively. Further research into the differences between males and females in the associations between facial morphology, attractiveness and physical strength is clearly needed. The use of a multivariate approach can increase our understanding of which regions of the face harbour specific information about hormone levels and perhaps behavioural traits.

  2. Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks

    DTIC Science & Technology

    2012-05-01

    Technologies 1.1.1. Virtual Whiteboard. Cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as… [fragmentary list of subscale codes: AE; spatial emergent (SE); auditory linguistic (AL); spatial positional (SP); facial figural (FF); spatial quantitative (SQ); facial motive (FM); tactile figural…] …driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.

  3. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated whether deficits in processing emotional expression affect facial identity processing, and vice versa, in children with autism spectrum disorder. Children with autism and IQ- and age-matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity, or by facial identity…

  4. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may contain rich expression information with which to accurately decode facial expressions, suggesting a novel mechanism, involving general interactions between distributed brain regions, that contributes to human facial expression recognition.
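
    A minimal sketch of the decoding logic behind an fcMVPA analysis of this kind, assuming Python with scikit-learn; the data below are random stand-ins, and the shapes, variable names, and classifier choice are illustrative assumptions rather than the authors' pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_samples, n_rois = 120, 90                     # e.g., 20 blocks x 6 expressions
        n_features = n_rois * (n_rois - 1) // 2         # upper-triangle FC values per sample
        X = rng.standard_normal((n_samples, n_features))  # stand-in for real FC patterns
        y = np.repeat(np.arange(6), n_samples // 6)     # labels for the 6 basic emotions

        # Cross-validated linear SVM on the connectivity vectors; chance here is ~1/6.
        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
        print(cross_val_score(clf, X, y, cv=5).mean())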

  5. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity

    PubMed Central

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others’ facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may contain rich expression information with which to accurately decode facial expressions, suggesting a novel mechanism, involving general interactions between distributed brain regions, that contributes to human facial expression recognition. PMID:29615882

  6. ALE meta-analysis on facial judgments of trustworthiness and attractiveness.

    PubMed

    Bzdok, D; Langner, R; Caspers, S; Kurth, F; Habel, U; Zilles, K; Laird, A; Eickhoff, Simon B

    2011-01-01

    Faces convey a multitude of information in social interaction, among which are trustworthiness and attractiveness. Humans process and evaluate these two dimensions very quickly due to their great adaptive importance. Trustworthiness evaluation is crucial for modulating behavior toward strangers; attractiveness evaluation is a crucial factor for mate selection, possibly providing cues for reproductive success. As both dimensions rapidly guide social behavior, this study tests the hypothesis that both judgments may be subserved by overlapping brain networks. To this end, we conducted an activation likelihood estimation meta-analysis on 16 functional magnetic resonance imaging studies pertaining to facial judgments of trustworthiness and attractiveness. Across combined, individual, and conjunction analyses of these two facial judgments, we observed consistent maxima in the amygdala, which corroborates our initial hypothesis. This finding supports the contemporary paradigm shift extending the amygdala's role from dominantly processing negative emotional stimuli to processing socially relevant ones. We speculate that the amygdala filters sensory information with evolutionarily conserved relevance. Our data suggest that such a role includes not only "fight-or-flight" decisions but also social behaviors with longer-term pay-off schedules, e.g., trustworthiness and attractiveness evaluation. © Springer-Verlag 2010
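
    For readers unfamiliar with the technique, a minimal sketch of the core activation likelihood estimation (ALE) computation, assuming Python with NumPy and SciPy: each experiment's reported foci are blurred into a modeled activation (MA) map, and voxel-wise ALE values are the probabilistic union of those maps. This is an illustrative simplification; real ALE uses sample-size-dependent kernels in standard space and permutation-based inference, and the grid size and sigma below are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ale_map(foci_per_experiment, shape=(40, 48, 40), sigma=2.0):
            """Voxel-wise ALE = 1 - prod_i(1 - MA_i) over experiments' MA maps."""
            not_active = np.ones(shape)
            for foci in foci_per_experiment:        # one MA map per experiment
                ma = np.zeros(shape)
                for (x, y, z) in foci:              # mark each reported peak voxel
                    ma[x, y, z] = 1.0
                ma = gaussian_filter(ma, sigma)     # blur peaks into a smooth map
                if ma.max() > 0:
                    ma = ma / ma.max()              # treat as per-voxel probability
                not_active *= (1.0 - ma)
            return 1.0 - not_active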

  7. Avoiding threat in late adulthood: testing two life span theories of emotion.

    PubMed

    Orgeta, Vasiliki

    2011-07-01

    The purpose of the present research was to explore the time course of age-related attentional biases and the role of emotion regulation as a potential mediator of older adults' performance in an emotion dot probe task. In two studies, younger and older adults (N = 80) completed a visual probe detection task, which presented happy, angry, and sad facial expressions. Across both studies, age influenced attentional responses to angry faces. Results indicated a bias away from angry-related facial emotion information occurring relatively late in attention. Age effects were not attributable to decreasing information processing speed or visuoperceptual function. Current results demonstrated that an age-related attentional preference away from angry facial cues was mediated by efforts to suppress emotion. Findings are discussed in relation to current theories of sociocognitive aging.

  8. A Shape-Based Account for Holistic Face Processing

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Heinrich H.; Bülthoff, Isabelle

    2016-01-01

    Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive…

  9. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. Here we provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand, and facial movement on the other, interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  10. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on emotions of moderate intensity; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, which indicates unconscious perception of peak facial expressions.

  11. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on emotions of moderate intensity; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, which indicates unconscious perception of peak facial expressions. PMID:27630604

  12. Facial Emotion Expression Recognition by Children at Familial Risk for Depression: High-Risk Boys are Oversensitive to Sadness

    ERIC Educational Resources Information Center

    Lopez-Duran, Nestor L.; Kuhlman, Kate R.; George, Charles; Kovacs, Maria

    2013-01-01

    Background: Offspring of depressed parents are at greatly increased risk for mood disorders. Among potential mechanisms of risk, recent studies have focused on information processing anomalies, such as attention and memory biases, in the offspring of depressed parents. In this study we examined another information processing domain, perceptual…

  13. Increased positive versus negative affective perception and memory in healthy volunteers following selective serotonin and norepinephrine reuptake inhibition.

    PubMed

    Harmer, Catherine J; Shelley, Nicholas C; Cowen, Philip J; Goodwin, Guy M

    2004-07-01

    Antidepressants that inhibit the reuptake of serotonin (SSRIs) or norepinephrine (SNRIs) are effective in the treatment of disorders such as depression and anxiety. Cognitive psychological theories emphasize the importance of correcting negative biases of information processing in the nonpharmacological treatment of these disorders, but it is not known whether antidepressant drugs can directly modulate the neural processing of affective information. The present study therefore assessed the actions of repeated antidepressant administration on perception and memory for positive and negative emotional information in healthy volunteers. Forty-two male and female volunteers were randomly assigned to 7 days of double-blind intervention with the SSRI citalopram (20 mg/day), the SNRI reboxetine (8 mg/day), or placebo. On the final day, facial expression recognition, emotion-potentiated startle response, and memory for affect-laden words were assessed. Questionnaires monitoring mood, hostility, and anxiety were given before and after treatment. In the facial expression recognition task, citalopram and reboxetine reduced the identification of the negative facial expressions of anger and fear. Citalopram also abolished the increased startle response found in the context of negative affective images. Both antidepressants increased the relative recall of positive (versus negative) emotional material. These changes in emotional processing occurred in the absence of significant differences in ratings of mood and anxiety. However, reboxetine decreased subjective ratings of hostility and elevated energy. Short-term administration of two different antidepressant types had similar effects on emotion-related tasks in healthy volunteers, reducing the processing of negative relative to positive emotional material. Such effects of antidepressants may ameliorate the negative biases in information processing that characterize mood and anxiety disorders. They also suggest a mechanism of action potentially compatible with cognitive theories of anxiety and depression.

  14. Gender differences in facial imitation and verbally reported emotional contagion from spontaneous to emotionally regulated processing levels.

    PubMed

    Sonnby-Borgström, Marianne; Jönsson, Peter; Svensson, Owe

    2008-04-01

    Previous studies on gender differences in facial imitation and verbally reported emotional contagion have investigated emotional responses to pictures of facial expressions at supraliminal exposure times. The aim of the present study was to investigate how gender differences are related to different exposure times, representing information processing levels from subliminal (spontaneous) to supraliminal (emotionally regulated). Further, the study aimed at exploring correlations between verbally reported emotional contagion and facial responses for men and women. Masked pictures of angry, happy and sad facial expressions were presented to 102 participants (51 men) at exposure times from subliminal (23 ms) to clearly supraliminal (2500 ms). Myoelectric activity (EMG) from the corrugator and the zygomaticus was measured and the participants reported their hedonic tone (verbally reported emotional contagion) after stimulus exposures. The results showed an effect of exposure time on gender differences in facial responses as well as in verbally reported emotional contagion. Women amplified imitative responses towards happy vs. angry faces and verbally reported emotional contagion with prolonged exposure times, whereas men did not. No gender differences were detected at the subliminal or borderliminal exposure times, but at the supraliminal exposure gender differences were found in imitation as well as in verbally reported emotional contagion. Women showed correspondence between their facial responses and their verbally reported emotional contagion to a greater extent than men. The results were interpreted in terms of gender differences in emotion regulation, rather than as differences in biologically prepared emotional reactivity.

  15. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled, containing 30 individuals divided equally among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images so that the features of the clear and blurred images were aligned. Features were extracted from a clear facial image, or from a template built from several clear facial images, using a wavelet transformation, and were imposed on the blurred image using the inverse wavelet transform. The results of this first approach were poor because the features did not all align: in most cases the eyes were aligned but the nose or mouth were not. A second approach treated each feature separately, but in some cases this produced a blocky effect because no closely matching features were available. In general, the small database limited the results because of the number of available individuals. With a larger database, color information and feature similarity could be investigated further, and the enhancement process could be improved by the availability of closer matches within each ethnicity.
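
    A minimal sketch of the wavelet-based imposition step described above, assuming Python with PyWavelets; the wavelet family, decomposition level, and function name are illustrative assumptions, and the two images are assumed to be the same size and already aligned.

        import numpy as np
        import pywt

        def impose_clear_details(blurred, clear, wavelet="db4", level=2):
            """Keep the blurred image's coarse approximation; swap in the clear
            template's detail coefficients; invert the transform."""
            cb = pywt.wavedec2(blurred.astype(float), wavelet, level=level)
            cc = pywt.wavedec2(clear.astype(float), wavelet, level=level)
            merged = [cb[0]] + cc[1:]   # coarse band from blurred, details from clear
            return pywt.waverec2(merged, wavelet)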

  16. Human amygdala response to dynamic facial expressions of positive and negative surprise.

    PubMed

    Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David

    2014-02-01

    Although brain imaging evidence accumulates to suggest that the amygdala plays a key role in the processing of novel stimuli, little is known about its role in processing the expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. The investigations that have probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may correspond either to a specific wonderment effect as such, or to the computation of a negative expected-value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. The effects of facial color and inversion on the N170 event-related potential (ERP) component.

    PubMed

    Minami, T; Nakajima, K; Changvisommid, L; Nakauchi, S

    2015-12-17

    Faces are important for social interaction because much can be perceived from facial details, including a person's race, age, and mood. Recent studies have shown that both configural information (e.g. face shape and inversion) and surface information (e.g. surface color and reflectance properties) are important for face perception. Therefore, the present study examined the effects of facial color and face inversion on event-related potential (ERP) responses, particularly the N170 component. Stimuli consisted of natural and bluish-colored faces, presented in both upright and upside-down orientations. An ANOVA was used to analyze N170 amplitudes and verify the effects of the main independent variables. Analysis of N170 amplitude revealed a significant interaction between stimulus orientation and color. Subsequent analysis indicated that the N170 was larger for bluish-colored faces than for natural-colored faces, and that the N170 to natural-colored faces was larger in response to inverted than to upright stimuli. Additionally, a multivariate pattern analysis (MVPA) investigated face-processing dynamics without any prior assumptions: both facial color and orientation could be decoded above chance from single-trial electroencephalogram (EEG) signals. Decoding performance for the color classification of inverted faces was significantly diminished compared to the upright orientation, suggesting that orientation processing predominates over facial color processing. Taken together, the present findings elucidate the temporal and spatial distribution of orientation and color processing during face processing. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
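
    A minimal sketch of what single-trial decoding of this kind involves, assuming Python with scikit-learn; the epochs below are random stand-ins, and all shapes and names are illustrative assumptions rather than the study's analysis code.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n_trials, n_channels, n_times = 200, 64, 100
        epochs = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in EEG epochs
        labels = rng.integers(0, 2, n_trials)                          # e.g., natural vs bluish

        X = epochs.reshape(n_trials, -1)   # flatten channels x time into one vector per trial
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        print(cross_val_score(clf, X, labels, cv=5).mean())            # chance is ~0.5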

  18. Time course of implicit processing and explicit processing of emotional faces and emotional words.

    PubMed

    Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred

    2011-05-01

    Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as has been shown for faces, whereas others report rather delayed decoding of emotional information from words. Here, we introduced an implicit task (color naming) and an explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERPs). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, EPN and LPP components, and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source in the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing than emotional words, but the emotional value of words may be detected at early stages of emotional processing in the visual cortex, as indicated by the extrastriate source activity. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement on one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for the objective measurement of facial paralysis. Motion information in the horizontal and vertical directions and appearance features on the apex frames are extracted based on local binary patterns (LBPs) in the temporal-spatial domain of each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. A support vector machine is applied to provide a quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.
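
    A minimal sketch of the block-wise uniform-LBP feature extraction on which pipelines like this are built, assuming Python with scikit-image and NumPy; the grid size and LBP parameters (P, R) are illustrative assumptions, not the values used in the paper.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def block_lbp_histograms(region, grid=(4, 4), P=8, R=1.0):
            """Concatenate uniform-LBP histograms over a grid of blocks.
            `region` is a 2-D grayscale image (e.g., uint8) of one facial region."""
            lbp = local_binary_pattern(region, P, R, method="uniform")  # codes 0..P+1
            n_bins = P + 2
            h, w = region.shape
            bh, bw = h // grid[0], w // grid[1]
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins),
                                           density=True)
                    feats.append(hist)
            return np.concatenate(feats)   # one feature vector per facial region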

  20. Digital Image Speckle Correlation for the Quantification of the Cosmetic Treatment with Botulinum Toxin Type A (BTX-A)

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Divya; Conkling, Nicole; Rafailovich, Miriam; Dagum, Alexander

    2012-02-01

    The skin on the face is directly attached to the underlying muscles. Here, we successfully introduce a non-invasive, non-contact technique, Digital Image Speckle Correlation (DISC), to measure the precise magnitude and duration of the facial muscle paralysis produced by BTX-A. Subjective evaluation by clinicians and patients fails to objectively quantify the direct effect and duration of BTX-A on the facial musculature. Using DISC, we can (a) directly measure the deformation field of the facial skin and determine the locus of facial muscular tension; (b) quantify and monitor muscular paralysis and subsequent re-innervation following injection; and (c) continuously correlate the appearance of wrinkles with muscular tension. Two sequential photographs of slight facial motion (frowning, raising eyebrows) are taken. DISC processes the images to produce a vector map of muscular displacement, from which spatially resolved information is obtained regarding facial tension. DISC can track the ability of different muscle groups to contract and can be used to predict the site of injection, quantify muscle paralysis, and measure the rate of recovery following BOTOX injection.
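
    A minimal sketch of the displacement-field idea underlying DISC, assuming Python with scikit-image: the two sequential photographs are divided into blocks, and a sub-pixel shift is estimated per block. The block size and names are assumptions, and phase cross-correlation is used here as a stand-in for the authors' speckle-correlation implementation.

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def displacement_field(img0, img1, block=32):
            """Estimate a (dy, dx) displacement vector for each block of the image pair."""
            h, w = img0.shape
            field = np.zeros((h // block, w // block, 2))
            for i in range(h // block):
                for j in range(w // block):
                    a = img0[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    b = img1[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    shift, _, _ = phase_cross_correlation(a, b, upsample_factor=10)
                    field[i, j] = shift        # sub-pixel (dy, dx) for this block
            return field                       # coarse vector map of skin displacement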

  1. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in the light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really a special area for facial processing is controversial; some researchers insist that the FFA is related to 'becoming an expert' with some kinds of visual objects, including faces. The neural mechanisms of prosopagnosia bear directly on this issue. Second, the amygdala appears to be closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are also related to gaze recognition, which explains why a patient with bilateral amygdala damage failed to recognize only fearful expressions: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition that prosopagnosic patients retain.

  2. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    PubMed

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various levels of detail has not previously been studied in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency content had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls at recognizing facial emotions from very low spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.

  3. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expressions. To evaluate the role of the cerebellum in recognising facial expressions, we delivered transcranial direct current stimulation (tDCS) over the cerebellum and the prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left responses to positive and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  4. Recognition of schematic facial displays of emotion in parents of children with autism.

    PubMed

    Palermo, Mark T; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria

    2006-07-01

    Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions, without the concurrent presentation of a verbal category, was investigated in 40 parents of children with autism and 40 matched controls. 'Autism fathers' performed worse than 'autism mothers', who in turn performed worse than controls, in decoding displays representing sadness or disgust. This indicates the need to include facial expression decoding tasks in genetic research on autism. In addition, emotional expression interactions between parents and their children with autism, particularly through play, where affect and prosody are 'physiologically' exaggerated, may stimulate the development of social competence. Future studies could benefit from a combination of stimuli including photographs and schematic drawings, with and without associated verbal categories. This may allow the subdivision of patients and relatives on the basis of the amount of information needed to understand and process social-emotionally relevant information.

  5. Eye-Tracking Evidence that Happy Faces Impair Verbal Message Comprehension: The Case of Health Warnings in Direct-to-Consumer Pharmaceutical Television Commercials

    PubMed Central

    Russell, Cristel Antonia; Swasy, John L.; Russell, Dale Wesley; Engel, Larry

    2017-01-01

    Risk warning or disclosure information in advertising is only effective in correcting consumers’ judgments if enough cognitive capacity is available to process that information. Hence, comprehension of verbal warnings in TV commercials may suffer if accompanied by positive visual elements. This research addresses this concern about cross-modality interference in the context of direct-to-consumer (DTC) pharmaceutical commercials in the United States by experimentally testing whether positive facial expressions reduce consumers’ understanding of the mandated health warning. A content analysis of a sample of DTC commercials reveals that positive facial expressions are more prevalent during the verbal warning act of the commercials than during the other acts. An eye-tracking experiment conducted with specially produced DTC commercials, which vary the valence of characters’ facial expressions during the health warning, provides evidence that happy faces reduce objective comprehension of the warning. PMID:29269979

  6. Culture shapes 7-month-olds' perceptual strategies in discriminating facial expressions of emotion.

    PubMed

    Geangu, Elena; Ichikawa, Hiroko; Lao, Junpeng; Kanazawa, So; Yamaguchi, Masami K; Caldara, Roberto; Turati, Chiara

    2016-07-25

    Emotional facial expressions are thought to have evolved because they play a crucial role in species' survival. From infancy, humans develop dedicated neural circuits [1] to exhibit and recognize a variety of facial expressions [2]. But there is increasing evidence that culture specifies when and how certain emotions can be expressed - social norms - and that the mature perceptual mechanisms used to transmit and decode the visual information from emotional signals differ between Western and Eastern adults [3-5]. Specifically, the mouth is more informative for transmitting emotional signals in Westerners and the eye region for Easterners [4], generating culture-specific fixation biases towards these features [5]. During development, it is recognized that cultural differences can be observed at the level of emotional reactivity and regulation [6], and in the culturally dominant modes of attention [7]. Nonetheless, to our knowledge no study has explored whether culture shapes the processing of facial emotional signals early in development. The data we report here show that, by 7 months, infants from both cultures visually discriminate facial expressions of emotion by relying on culturally distinct fixation strategies, resembling those used by the adults from the environment in which they develop [5]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Objective grading of facial paralysis using Local Binary Patterns in video processing.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F

    2008-01-01

    This paper presents a novel framework for objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) in the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine the micro-patterns and large-scale patterns into a feature vector, which increases the algorithmic robustness and reduces noise effects while still retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, demonstrating its accuracy and efficiency.
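
    The pipeline sketched in this record reduces to three standard building blocks: a uniform-LBP histogram per facial region, the Resistor-Average Distance (RAD) between histograms from the two sides of the face as an asymmetry measure, and an SVM that maps the asymmetry features to an H-B grade. The sketch below illustrates only that combination; the function names, region handling, and parameters are illustrative assumptions, and the authors' temporal-spatial and multi-resolution LBP extensions are omitted.

```python
# Illustrative sketch of the LBP + RAD + SVM grading pipeline; not the
# authors' implementation (their temporal-spatial and multi-resolution
# LBP extensions are omitted for brevity).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(region, P=8, R=1):
    """Uniform-LBP histogram of one grayscale facial region."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform codes plus one 'non-uniform' bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist + 1e-12  # avoid zeros in the KL terms below

def kl(p, q):
    """Directed Kullback-Leibler divergence between two histograms."""
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    """RAD: harmonic combination of the two directed KL divergences."""
    d_pq, d_qp = kl(p, q), kl(q, p)
    if d_pq <= 0.0 or d_qp <= 0.0:
        return 0.0  # identical histograms: perfectly symmetric movement
    return 1.0 / (1.0 / d_pq + 1.0 / d_qp)

def asymmetry_features(left_regions, right_regions):
    """One RAD value per paired facial region (forehead, eye, mouth, ...)."""
    return np.array([
        resistor_average_distance(lbp_histogram(l), lbp_histogram(r))
        for l, r in zip(left_regions, right_regions)
    ])

# Grading: X_train holds per-video asymmetry feature vectors, y_train the
# clinician-assigned H-B grades (1-6).
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predicted_grade = clf.predict(asymmetry_features(left, right)[None, :])
```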

  8. Enhanced embodied response following ambiguous emotional processing.

    PubMed

    Beffara, Brice; Ouellet, Marc; Vermeulen, Nicolas; Basu, Anamitra; Morisseau, Tiffany; Mermillod, Martial

    2012-08-01

    It has generally been assumed that high-level cognitive and emotional processes are based on amodal conceptual information. In contrast, however, "embodied simulation" theory states that the perception of an emotional signal can trigger a simulation of the related state in the motor, somatosensory, and affective systems. To study the effect of social context on the mimicry effect predicted by the "embodied simulation" theory, we recorded the electromyographic (EMG) activity of participants when looking at emotional facial expressions. We observed an increase in embodied responses when the participants were exposed to a context involving social valence before seeing the emotional facial expressions. An examination of the dynamic EMG activity induced by two socially relevant emotional expressions (namely joy and anger) revealed enhanced EMG responses of the facial muscles associated with the related social prime (either positive or negative). These results are discussed within the general framework of embodiment theory.

  9. Facial soft tissue thickness differences among three skeletal classes in Japanese population.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko

    2014-03-01

    Facial reconstruction is used in forensic anthropology to recreate the face from unknown human skeletal remains, and to elucidate the antemortem facial appearance. This requires accurate assessment of the skull (age, sex, ancestry, etc.) and soft tissue thickness data. However, additional information is required to reconstruct the face, as the information obtained from the skull is limited. Here, we aimed to examine the information from the skull that is required for accurate facial reconstruction. The human facial profile is classified into 3 shapes: straight, convex, and concave. These facial profiles facilitate recognition of individuals. The skeletal classes used in orthodontics are classified according to these 3 facial types. We have previously reported these differences in Japanese females. In the present study, we applied this classification to facial tissue measurement, compared tissue depth across the skeletal classes for both sexes in the Japanese population, and elucidated the differences between the skeletal classes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Selective attention modulates early human evoked potentials during emotional face-voice processing.

    PubMed

    Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A

    2015-04-01

    Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

  11. Sex differences in event-related potentials and attentional biases to emotional facial stimuli.

    PubMed

    Pfabigan, Daniela M; Lamplmayr-Kragl, Elisabeth; Pintzinger, Nina M; Sailer, Uta; Tran, Ulrich S

    2014-01-01

    Attentional processes play an important role in the processing of emotional information. Previous research reported attentional biases during stimulus processing in anxiety and depression. However, sex differences in the processing of emotional stimuli and higher prevalence rates of anxiety disorders among women, compared to men, suggest that attentional biases may also differ between the two sexes. The present study used a modified version of the dot probe task with happy, angry, and neutral facial stimuli to investigate the time course of attentional biases in healthy volunteers. Moreover, associations of attentional biases with alexithymia were examined on the behavioral and physiological level. Event-related potentials were measured while 21 participants (11 women) performed the task; the analysis also utilized, for the first time, a difference-wave approach to highlight emotion-specific aspects. Women showed overall enhanced probe P1 amplitudes compared to men, in particular after rewarding facial stimuli. Using the difference-wave approach, probe P1 amplitudes appeared specifically enhanced with regard to congruently presented happy facial stimuli among women, compared to men. Both methods yielded enhanced probe P1 amplitudes after presentation of the emotional stimulus in the left compared to the right visual hemifield. Probe P1 amplitudes correlated negatively with self-reported alexithymia; most of these correlations were observable only in women. Our results suggest that women orient their attention to a greater extent to facial stimuli than men and corroborate that alexithymia is a correlate of reduced emotional reactivity on a neuronal level. We recommend using a difference-wave approach when addressing attentional processes of orientation and disengagement in future studies.

  12. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise with unimodal auditory stimuli and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays were either isoluminant, to minimise the contribution of luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Neural processing of high and low spatial frequency information in faces changes across development: qualitative changes in face processing during adolescence.

    PubMed

    Peters, Judith C; Vlamings, Petra; Kemner, Chantal

    2013-05-01

    Face perception in adults depends on skilled processing of interattribute distances ('configural' processing), which is disrupted for faces presented in inverted orientation (face inversion effect or FIE). Children are not proficient in configural processing, and this might relate to an underlying immaturity to use facial information in low spatial frequency (SF) ranges, which capture the coarse information needed for configural processing. We hypothesized that during adolescence a shift from use of high to low SF information takes place. Therefore, we studied the influence of SF content on neural face processing in groups of children (9-10 years), adolescents (14-15 years) and young adults (21-29 years) by measuring event-related potentials (ERPs) to upright and inverted faces which varied in SF content. Results revealed that children show a neural FIE in early processing stages (i.e. P1; generated in early visual areas), suggesting a superficial, global facial analysis. In contrast, ERPs of adults revealed an FIE at later processing stages (i.e. N170; generated in face-selective, higher visual areas). Interestingly, adolescents showed FIEs in both processing stages, suggesting a hybrid developmental stage. Furthermore, adolescents and adults showed FIEs for stimuli containing low SF information, whereas such effects were driven by both low and high SF information in children. These results indicate that face processing has a protracted maturational course into adolescence, and is dependent on changes in SF processing. During adolescence, sensitivity to configural cues is developed, which aids the fast and holistic processing that is so special for faces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Smiling faces, sometimes they don't tell the truth: facial expression in the ultimatum game impacts decision making and event-related potentials.

    PubMed

    Mussel, Patrick; Hewig, Johannes; Allen, John J B; Coles, Michael G H; Miltner, Wolfgang

    2014-04-01

    Facial expressions are an important aspect of social interaction, conveying not only information regarding emotional states, but also regarding intentions, personality, and complex social characteristics. The present research investigates how a smiling, compared to a nonsmiling, expression impacts decision making and underlying cognitive and emotional processes in economic bargaining. Our results using the ultimatum game show that facial expressions have an impact on decision making as well as the feedback-related negativity following the offer. Furthermore, a moderating effect of sex on decision making was observed, with differential effects of facial expressions from male compared to female proposers. It is concluded that predictions of bargaining behavior must account for aspects of social interactions as well as sex effects to obtain more precise estimates of behavior. Copyright © 2014 Society for Psychophysiological Research.

  15. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  16. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  17. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    PubMed

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

    The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

  18. Facial transplantation for massive traumatic injuries.

    PubMed

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Neural signatures of conscious and unconscious emotional face processing in human infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2015-03-01

    Human adults can process emotional information both with and without conscious awareness, and it has been suggested that the two processes rely on partly distinct brain mechanisms. However, the developmental origins of these brain processes are unknown. In the present event-related brain potential (ERP) study, we examined the brain responses of 7-month-old infants in response to subliminally (50 and 100 msec) and supraliminally (500 msec) presented happy and fearful facial expressions. Our results revealed that infants' brain responses (Pb and Nc) over central electrodes distinguished between emotions irrespective of stimulus duration, whereas the discrimination between emotions at occipital electrodes (N290 and P400) only occurred when faces were presented supraliminally (above threshold). This suggests that early in development the human brain not only discriminates between happy and fearful facial expressions irrespective of conscious perception, but also that, similar to adults, supraliminal and subliminal emotion processing relies on distinct neural processes. Our data further suggest that the processing of emotional facial expressions differs across infants depending on their behaviorally shown perceptual sensitivity. The current ERP findings suggest that distinct brain processes underpinning conscious and unconscious emotion perception emerge early in ontogeny and can therefore be seen as a key feature of human social functioning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Evidence of Rapid Modulation by Social Information of Subjective, Physiological, and Neural Responses to Emotional Expressions

    PubMed Central

    Mermillod, Martial; Grynberg, Delphine; Pio-Lopez, Léo; Rychlowska, Magdalena; Beffara, Brice; Harquel, Sylvain; Vermeulen, Nicolas; Niedenthal, Paula M.; Dutheil, Frédéric; Droit-Volet, Sylvie

    2018-01-01

    Recent research suggests that conceptual or emotional factors could influence the perceptual processing of stimuli. In this article, we aimed to evaluate the effect of social information (positive, negative, or no information related to the character of the target) on subjective (perceived and felt valence and arousal), physiological (facial mimicry) as well as on neural (P100 and N170) responses to dynamic emotional facial expressions (EFE) that varied from neutral to one of the six basic emotions. Across three studies, the results showed reduced ratings of valence and arousal of EFE associated with incongruent social information (Study 1), increased electromyographical responses (Study 2), and significant modulation of P100 and N170 components (Study 3) when EFE were associated with social (positive and negative) information (vs. no information). These studies revealed that positive or negative social information reduces subjective responses to incongruent EFE and produces a similar neural and physiological boost of the early perceptual processing of EFE irrespective of their congruency. In conclusion, the article suggests that the presence of positive or negative social context modulates early physiological and neural activity preceding subsequent behavior. PMID:29375330

  1. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. The role of encoding and attention in facial emotion memory: an EEG investigation.

    PubMed

    Brenner, Colleen A; Rumak, Samuel P; Burns, Amy M N; Kieffaber, Paul D

    2014-09-01

    Facial expressions are encoded via sensory mechanisms, but meaning extraction and salience of these expressions involve cognitive functions. We investigated the time course of sensory encoding and subsequent maintenance in memory via EEG. Twenty-nine healthy participants completed a facial emotion delayed match-to-sample task. P100, N170 and N250 ERPs were measured in response to the first stimulus, and evoked theta power (4-7 Hz) was measured during the delay interval. Negative facial expressions produced larger N170 amplitudes and greater theta power early in the delay. N170 amplitude correlated with theta power; however, larger N170 amplitude coupled with greater theta power only predicted behavioural performance for one emotion condition (very happy) out of six tested (see Supplemental Data). These findings indicate that the N170 ERP may be sensitive to emotional facial expressions when task demands require encoding and retention of this information. Furthermore, sustained theta activity may represent continued attentional processing that supports short-term memory, especially of negative facial stimuli. Further study is needed to investigate the potential influence of these measures, and their interaction, on behavioural performance. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  3. Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

    PubMed

    Isomura, Tomoko; Nakano, Tamami

    2016-12-14

    Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).

  4. Impaired holistic processing of unfamiliar individual faces in acquired prosopagnosia.

    PubMed

    Ramon, Meike; Busigny, Thomas; Rossion, Bruno

    2010-03-01

    Prosopagnosia is an impairment at individualizing faces that classically follows brain damage. Several studies have reported observations supporting an impairment of holistic/configural face processing in acquired prosopagnosia. However, this issue may require more compelling evidence as the cases reported were generally patients suffering from integrative visual agnosia, and the sensitivity of the paradigms used to measure holistic/configural face processing in normal individuals remains unclear. Here we tested a well-characterized case of acquired prosopagnosia (PS) with no object recognition impairment, in five behavioral experiments (whole/part and composite face paradigms with unfamiliar faces). In all experiments, for normal observers we found that processing of a given facial feature was affected by the location and identity of the other features in a whole face configuration. In contrast, the patient's results over these experiments indicate that she encodes local facial information independently of the other features embedded in the whole facial context. These observations and a survey of the literature indicate that abnormal holistic processing of the individual face may be a characteristic hallmark of prosopagnosia following brain damage, perhaps with various degrees of severity. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  5. Aberrant Pattern of Scanning in Prosopagnosia Reflects Impaired Face Processing

    ERIC Educational Resources Information Center

    Stephan, Blossom Christa Maree; Caine, Diana

    2009-01-01

    Visual scanpath recording was used to investigate the information processing strategies used by a prosopagnosic patient, SC, when viewing faces. Compared to controls, SC showed an aberrant pattern of scanning, directing attention away from the internal configuration of facial features (eyes, nose) towards peripheral regions (hair, forehead) of the…

  6. Facilitating Comprehension and Processing of Language in Classroom and Clinic.

    ERIC Educational Resources Information Center

    Lasky, Elaine Z.

    A speech/language remediation-intervention model is proposed to enhance processing of auditory information in students with language or learning disabilities. Such children have difficulty attending to language signals (verbal and nonverbal responses ranging from facial expressions and gestures to those requiring the generation of complex…

  7. Fusiform gyrus volume reduction and facial recognition in chronic schizophrenia.

    PubMed

    Onitsuka, Toshiaki; Shenton, Martha E; Kasai, Kiyoto; Nestor, Paul G; Toner, Sarah K; Kikinis, Ron; Jolesz, Ferenc A; McCarley, Robert W

    2003-04-01

    The fusiform gyrus (FG), or occipitotemporal gyrus, is thought to subserve the processing and encoding of faces. Of note, several studies have reported that patients with schizophrenia show deficits in facial processing. It is thus hypothesized that the FG might be one brain region underlying abnormal facial recognition in schizophrenia. The objectives of this study were to determine whether there are abnormalities in gray matter volumes for the anterior and the posterior FG in patients with chronic schizophrenia and to investigate relationships between FG subregions and immediate and delayed memory for faces. Patients were recruited from the Boston VA Healthcare System, Brockton Division, and control subjects were recruited through newspaper advertisement. Study participants included 21 male patients diagnosed as having chronic schizophrenia and 28 male controls. Participants underwent high-spatial-resolution magnetic resonance imaging, and facial recognition memory was evaluated. Main outcome measures included anterior and posterior FG gray matter volumes based on high-spatial-resolution magnetic resonance imaging, a detailed and reliable manual delineation using 3-dimensional information, and correlation coefficients between FG subregions and raw scores on immediate and delayed facial memory derived from the Wechsler Memory Scale III. Patients with chronic schizophrenia had overall smaller FG gray matter volumes (10%) than normal controls. Additionally, patients with schizophrenia performed more poorly than normal controls in both immediate and delayed facial memory tests. Moreover, the degree of poor performance on delayed memory for faces was significantly correlated with the degree of bilateral anterior FG reduction in patients with schizophrenia. These results suggest that neuroanatomic FG abnormalities underlie at least some of the deficits associated with facial recognition in schizophrenia.

  8. Pilot study of facial soft tissue thickness differences among three skeletal classes in Japanese females.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Yoshino, Mineo; Oohigashi, Shina; Miyazawa, Hiroo; Inoue, Katsuhiro

    2010-02-25

    Facial reconstruction is a technique used in forensic anthropology to estimate the appearance of the antemortem face from unknown human skeletal remains. This requires accurate skull assessment (for variables such as age, sex, and race) and soft tissue thickness data. However, the skull can provide only limited information, and further data are needed to reconstruct the face. The authors herein obtained further information from the skull in order to reconstruct the face more accurately. Skulls can be classified into three facial types on the basis of orthodontic skeletal classes (namely, type I, straight facial profile; type II, convex facial profile; and type III, concave facial profile). This concept was applied to facial tissue measurement, and soft tissue depth was compared in each skeletal class in a Japanese female population. Differences in soft tissue depth between skeletal classes were observed, and this information may enable more accurate reconstruction than sex-specific depth alone. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  9. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala, and against the proposal of a higher amygdala sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
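
    As background for the stimulus manipulation, LSF and HSF versions of a face image are conventionally produced by low-pass and high-pass spatial filtering. A minimal sketch follows, using Gaussian filtering; the cutoff value is illustrative and not the filter specification used in this study.

```python
# Minimal sketch of splitting a face image into low- and high-spatial-
# frequency components; sigma is an illustrative cutoff, not the study's
# actual filter parameter.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(face, sigma=6.0):
    """Return (LSF, HSF) components of a grayscale face image.

    Larger sigma keeps only coarser structure in the LSF component.
    """
    face = face.astype(float)
    lsf = gaussian_filter(face, sigma=sigma)  # low-pass: coarse configuration
    hsf = face - lsf                          # residual: fine edges and features
    return lsf, hsf
```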

  10. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  11. Fusiform Gyrus Dysfunction is Associated with Perceptual Processing Efficiency to Emotional Faces in Adolescent Depression: A Model-Based Approach.

    PubMed

    Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T

    2016-01-01

    While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.
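
    As a reading aid for the model-based analysis above: in the LBA, each response alternative is an accumulator that rises linearly from a uniformly sampled start point toward a threshold, at a drift rate drawn from a normal distribution (the "perceptual processing efficiency" parameter discussed in the record). Below is a hypothetical forward simulation; the parameter values are arbitrary, negative drifts are floored rather than resampled, and this is not the hierarchical Bayesian estimation the study actually performed.

```python
# Hypothetical forward simulation of the Linear Ballistic Accumulator (LBA).
# Parameter values are arbitrary; the standard LBA resamples negative drift
# rates, which is simplified to flooring here.
import numpy as np

def simulate_lba(n_trials, v=(2.0, 1.0), s=0.5, A=0.8, b=1.2, t0=0.25, seed=None):
    rng = np.random.default_rng(seed)
    n_acc = len(v)
    starts = rng.uniform(0.0, A, size=(n_trials, n_acc))  # start points
    drifts = rng.normal(v, s, size=(n_trials, n_acc))     # per-trial drift rates
    drifts = np.maximum(drifts, 1e-6)                     # keep rates positive
    times = (b - starts) / drifts                         # time to reach threshold
    choice = times.argmin(axis=1)                         # winning accumulator
    rt = times.min(axis=1) + t0                           # add non-decision time
    return choice, rt

# choice, rt = simulate_lba(1000)  # e.g., accumulator 0 = 'happy', 1 = 'sad'
```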

  12. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  13. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively by dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted-face in a process akin to photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge, this is the first effort at generating facial composites from DNA, and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
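
    The prediction logic described here, a base-face plus overlaid SNP effects, can be pictured as vector arithmetic in a dense facial shape space. The sketch below renders that idea under assumed data structures; it is not the authors' code, and real effect vectors would come from the association modeling cited in the record.

```python
# Schematic sketch of composite construction in a dense facial shape space;
# the array layout and function name are assumptions, not the authors' code.
import numpy as np

def predict_face(base_face, snp_effects, genotypes):
    """base_face:   (n_vertices, 3) sex/ancestry-matched average face.
    snp_effects: (n_snps, n_vertices, 3) displacement per effect-allele copy.
    genotypes:   (n_snps,) effect-allele counts in {0, 1, 2}.
    """
    # Weighted sum of per-SNP displacement fields, overlaid on the base-face
    # in a process akin to photomontage / image blending.
    overlay = np.tensordot(genotypes, snp_effects, axes=1)
    return base_face + overlay
```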

  14. Positive facial expressions during retrieval of self-defining memories.

    PubMed

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their self-defining memories, and autobiographical retrieval was analyzed with facial analysis software. This software (FaceReader) synthesizes the facial expression information (i.e., cheek, lip, and eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of self-defining memories, and they provide valuable physiological information about the emotional experience of the past.

  15. Electrophysiological evidence for biased competition in V1 for fear expressions.

    PubMed

    West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay

    2011-11-01

    When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

  16. [Integration of the functional signal of intraoperative EMG of the facial nerve into a navigation model for surgery of the petrous bone].

    PubMed

    Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T

    2008-10-01

    PROBLEM DEFINITION: The goal of this work is the integration of the information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses were examined: (I) the facial nerve (N. VII) can be located intraoperatively with high reliability by the stimulation probe, and a computer program can discriminate true-positive EMG signals from false-positive artifacts; (II) the course of the facial nerve can be registered in three dimensions from EMG signals at a nerve model in the lab test, the individual points along the nerve can be combined into a route model, and the route model can be integrated into digital volume tomography (DVT) data. (I) Intraoperative EMG signals of the facial nerve from 128 measurements were classified by automatic software, and the results were compared with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT data set was acquired. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany), and the stimulation probe of the EMG system was tracked by the navigation system, which was extended by a processing unit (MiMed, Technische Universität München, Germany). The classified EMG parameters along the facial nerve course could thus be received, processed, and assembled into a model of the nerve's route. Operability was examined at 120 (10 x 12) measuring points. The classification algorithm identified the EMG signals of the facial nerve correctly in all measurements, and in all 10 attempts the nerve route was successfully visualized as a three-dimensional model; the different sizes of the individual measuring points correctly reflect the corresponding values of Istim and UEMG. This work demonstrates the feasibility of automatic classification of intraoperative EMG signals of the facial nerve by a processing unit, as well as the feasibility of tracking the position of the stimulation probe and integrating it into a model of the facial nerve route (e.g., within DVT data). The reliability with which the position of the nerve can be captured by the stimulation probe is also encoded in the resulting route model.
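
    To make the two feasibility claims concrete, here is one plausible shape such a processing unit could take: stimulation-evoked EMG responses are accepted or rejected by simple amplitude and current criteria, and the navigated probe positions of accepted points are strung into a route model. The thresholds, field names, and classification rule are hypothetical; the record does not specify the actual algorithm.

```python
# Hypothetical sketch of EMG-based nerve-route building; thresholds, field
# names, and the artifact-rejection rule are assumptions, not the published
# algorithm.
from dataclasses import dataclass

@dataclass
class StimulationPoint:
    position_mm: tuple   # probe-tip position reported by the navigation system
    i_stim_ma: float     # stimulation current (Istim)
    u_emg_uv: float      # evoked EMG amplitude (UEMG)

def is_true_response(p, min_amp_uv=100.0, max_current_ma=2.0):
    # A genuine facial-nerve response: sizable EMG amplitude evoked at a
    # low stimulation current; everything else is treated as artifact.
    return p.u_emg_uv >= min_amp_uv and p.i_stim_ma <= max_current_ma

def build_route(points):
    """Ordered polyline through accepted points (the nerve route model)."""
    return [p.position_mm for p in points if is_true_response(p)]
```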

  17. Behind the Robot's Smiles and Frowns: In Social Context, People Do Not Mirror Android's Expressions But React to Their Informational Value.

    PubMed

    Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S; Winkielman, Piotr

    2018-01-01

    Facial actions are key elements of non-verbal behavior. Perceivers' reactions to others' facial expressions often represent a match or mirroring (e.g., they smile in response to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants' reactions over the zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants' facial responses to the android's expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants' responses to the game outcome were similar regardless of whether it was conveyed via the android's smile or frown. Furthermore, the outcome had greater impact on people's facial reactions when it was conveyed through the android's face than through a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions.

  18. Passing faces: sequence-dependent variations in the perceptual processing of emotional faces.

    PubMed

    Karl, Christian; Hewig, Johannes; Osinsky, Roman

    2016-10-01

    There is broad evidence that contextual factors influence the processing of emotional facial expressions. Yet temporal-dynamic aspects, inter alia how face processing is influenced by the specific order of neutral and emotional facial expressions, have been largely neglected. To shed light on this topic, we recorded electroencephalogram from 168 healthy participants while they performed a gender-discrimination task with angry and neutral faces. Our event-related potential (ERP) analyses revealed a strong emotional modulation of the N170 component, indicating that the basic visual encoding and emotional analysis of a facial stimulus happen, at least partially, in parallel. While the N170 and the late positive potential (LPP; 400-600 ms) were only modestly affected by the sequence of preceding faces, we observed a strong influence of face sequences on the early posterior negativity (EPN; 200-300 ms). Finally, the differing response patterns of the EPN and LPP indicate that these two ERPs represent distinct processes during face analysis: while the former seems to represent the integration of contextual information in the perception of a current face, the latter appears to represent the net emotional interpretation of a current face.

  19. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. An Analysis of Biometric Technology as an Enabler to Information Assurance

    DTIC Science & Technology

    2005-03-01

    Facial recognition systems are gaining momentum as of late. The reason for this is that facial recognition systems ... the traffic camera on the street corner, video technology is everywhere. There are a couple of different methods currently being used for facial ...

  2. Variants of Independence in the Perception of Facial Identity and Expression

    ERIC Educational Resources Information Center

    Fitousi, Daniel; Wenger, Michael J.

    2013-01-01

    A prominent theory in the face perception literature--the parallel-route hypothesis (Bruce & Young, 1986)--assumes a dedicated channel for the processing of identity that is separate and independent from the channel(s) in which nonidentity information is processed (e.g., expression, eye gaze). The current work subjected this assumption to…

  3. Curvilinear relationship between phonological working memory load and social-emotional modulation

    PubMed Central

    Mano, Quintino R.; Brown, Gregory G.; Bolden, Khalima; Aupperle, Robin; Sullivan, Sarah; Paulus, Martin P.; Stein, Murray B.

    2015-01-01

    Accumulating evidence suggests that working memory load is an important factor for the interplay between cognitive and facial-affective processing. However, it is unclear how distraction caused by the perception of faces interacts with load-related performance. We developed a modified version of the delayed match-to-sample task wherein task-irrelevant facial distracters were presented early in the rehearsal of pseudoword memoranda that varied incrementally in load size (1-syllable, 2-syllables, or 3-syllables). Facial distracters displayed happy, sad, or neutral expressions in Experiment 1 (N=60) and happy, fearful, or neutral expressions in Experiment 2 (N=29). Facial distracters significantly disrupted task performance in the intermediate load condition (2-syllable) but not in the low or high load conditions (1- and 3-syllables, respectively), an interaction replicated and generalised in Experiment 2. All facial distracters disrupted working memory in the intermediate load condition irrespective of valence, suggesting a primary and general effect of distraction caused by faces. However, sad and fearful faces tended to be less disruptive than happy faces, suggesting a secondary and specific valence effect. Working memory appears to be most vulnerable to social-emotional information at intermediate loads. At low loads, spare capacity can accommodate the combinatorial load (1-syllable plus facial distracter), whereas high loads saturate capacity, preventing facial stimuli from occupying working memory slots and causing disruption. PMID:22928750

  4. The valence-specific laterality effect in free viewing conditions: The influence of sex, handedness, and response bias.

    PubMed

    Rodway, Paul; Wright, Lynn; Hardie, Scott

    2003-12-01

    The right hemisphere has often been viewed as having a dominant role in the processing of emotional information. Other evidence indicates that both hemispheres process emotional information but their involvement is valence specific, with the right hemisphere dealing with negative emotions and the left hemisphere preferentially processing positive emotions. This has been found under both restricted (Reuter-Lorenz & Davidson, 1981) and free viewing conditions (Jansari, Tranel, & Adolphs, 2000). It remains unclear whether the valence-specific laterality effect is also sex specific or is influenced by the handedness of participants. To explore this issue we repeated Jansari et al.'s free-viewing laterality task with 78 participants. We found a valence-specific laterality effect in women but not men, with women discriminating negative emotional expressions more accurately when the face was presented on the left-hand side and discriminating positive emotions more accurately when those faces were presented on the right-hand side. These results indicate that under free viewing conditions women are more lateralised for the processing of facial emotion than are men. Handedness did not affect the lateralised processing of facial emotion. Finally, participants demonstrated a response bias on control trials, where facial emotion did not differ between the faces. Participants selected the left-hand side more frequently when they believed the expression was negative and the right-hand side more frequently when they believed the expression was positive. This response bias can cause a spurious valence-specific laterality effect which might have contributed to the conflicting findings within the literature.

  5. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has attracted many researchers because of its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are not transparent in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the eyeglass regions appear dark and no thermal information is available from the eyes. We propose a temperature space method to correct the effect of eyeglasses using the thermal information in neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and the combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
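
    The PCA classification step named in this record can be illustrated with a minimal sketch. The eyeglass temperature-correction step and the EMC variant are omitted, and the image size, sample counts, and emotion labels below are stand-in assumptions, not the KTFE protocol:

```python
# Hypothetical sketch: PCA-based emotion classification of (already corrected)
# thermal face images. Data are random stand-ins; labels 0/1/2 are assumed
# emotion classes. The EMC and PCA-EMC variants from the record are not shown.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Assume each thermal face image is flattened to a 64 x 64 = 4096-dim vector.
X_train = rng.random((200, 4096))
y_train = rng.integers(0, 3, 200)     # assumed labels: 0=neutral, 1=happy, 2=fearful
X_test = rng.random((20, 4096))

# Project onto a low-dimensional eigen-space ("eigenfaces" of the thermal set).
pca = PCA(n_components=40)
Z_train = pca.fit_transform(X_train)
Z_test = pca.transform(X_test)

# Classify by distance to each emotion's centroid in the eigen-space.
clf = NearestCentroid().fit(Z_train, y_train)
print(clf.predict(Z_test))
```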

  7. Startling similarity: Effects of facial self-resemblance and familiarity on the processing of emotional faces

    PubMed Central

    Larra, Mauro F.; Merz, Martina U.; Schächinger, Hartmut

    2017-01-01

    Facial self-resemblance has been associated with positive emotional evaluations, but this effect may be biased by self-face familiarity. Here we report two experiments utilizing startle modulation to investigate how the processing of facial expressions of emotion is affected by subtle resemblance to the self as well as to familiar faces. Participants in Experiment I (N = 39) were presented with morphed faces showing happy, neutral, and fearful expressions which were manipulated to resemble either their own or unknown faces. At SOAs of either 300 ms or 3500–4500 ms after picture onset, startle responses were elicited by binaural bursts of white noise (50 ms, 105 dB) and recorded at the orbicularis oculi via EMG. Manual reaction time was measured in a simple emotion discrimination paradigm. Pictures preceding noise bursts by the short SOA inhibited startle (prepulse inhibition, PPI). Both affective modulation and PPI of startle in response to emotional faces were altered by physical similarity to the self. As indexed both by relative facilitation of startle and faster manual responses, self-resemblance apparently induced deeper processing of facial affect, particularly for happy faces. Experiment II (N = 54) produced similar findings using morphs of famous faces, yet showed no impact of mere familiarity on PPI effects or on response times. The results are discussed with respect to differential (presumably pre-attentive) effects of self-specific vs. familiar information in face processing. PMID:29216226

  8. Facial expressions perceived by the adolescent brain: Towards the proficient use of low spatial frequency information.

    PubMed

    Peters, Judith C; Kemner, Chantal

    2017-10-01

    Rapid decoding of emotional expressions is essential for social communication. Fast processing of facial expressions depends on the adequate (subcortical) processing of important global face cues in the low spatial frequency (LSF) ranges. However, children below 9 years of age extract fearful expression information from local details represented by high SF (HSF) image content. Our ERP study investigated at which developmental stage this ineffective HSF-driven processing is replaced by the proficient and rapid LSF-driven perception of fearful faces, in which adults are highly skilled. We examined behavioral and neural responses to high- and low-pass filtered faces with a fearful or neutral expression in groups of children on the verge of pre-adolescence (9-10 years), adolescents (14-15 years), and young adults (20-28 years). Our results suggest that the neural emotional face processing network has a protracted maturational course into adolescence, which is related to changes in SF processing. In mid-adolescence, increased sensitivity to emotional LSF cues is developed, which aids the fast and adequate processing of fearful expressions that might signal impending danger. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  10. The relative contributions of facial shape and surface information to perceptions of attractiveness and dominance.

    PubMed

    Torrance, Jaimie S; Wincenciak, Joanna; Hahn, Amanda C; DeBruine, Lisa M; Jones, Benedict C

    2014-01-01

    Although many studies have investigated the facial characteristics that influence perceptions of others' attractiveness and dominance, the majority of these studies have focused on either the effects of shape information or surface information alone. Consequently, the relative contributions of facial shape and surface characteristics to attractiveness and dominance perceptions are unclear. To address this issue, we investigated the relationships between ratings of original versions of faces and ratings of versions in which either surface information had been standardized (i.e., shape-only versions) or shape information had been standardized (i.e., surface-only versions). For attractiveness and dominance judgments of both male and female faces, ratings of shape-only and surface-only versions independently predicted ratings of the original versions of faces. The correlations between ratings of original and shape-only versions and between ratings of original and surface-only versions differed only in two instances. For male attractiveness, ratings of original versions were more strongly related to ratings of surface-only than shape-only versions, suggesting that surface information is particularly important for men's facial attractiveness. The opposite was true for female physical dominance, suggesting that shape information is particularly important for women's facial physical dominance. In summary, our results indicate that both facial shape and surface information contribute to judgments of others' attractiveness and dominance, suggesting that it may be important to consider both sources of information in research on these topics.

  11. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found a robust face identity aftereffect in both sets of experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  13. Top-Down and Bottom-Up Visual Information Processing of Non-Social Stimuli in High-Functioning Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Maekawa, Toshihiko; Tobimatsu, Shozo; Inada, Naoko; Oribe, Naoya; Onitsuka, Toshiaki; Kanba, Shigenobu; Kamio, Yoko

    2011-01-01

    Individuals with high-functioning autism spectrum disorder (HF-ASD) often show superior performance in simple visual tasks, despite difficulties in the perception of socially important information such as facial expression. The neural basis of visual perception abnormalities associated with HF-ASD is currently unclear. We sought to elucidate the…

  14. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    PubMed

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg

    2014-01-01

    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.

  15. Evidence of a Shift from Featural to Configural Face Processing in Infancy

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca

    2007-01-01

    Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…

  16. Behind the Robot’s Smiles and Frowns: In Social Context, People Do Not Mirror Android’s Expressions But React to Their Informational Value

    PubMed Central

    Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S.; Winkielman, Piotr

    2018-01-01

    Facial actions are key elements of non-verbal behavior. Perceivers’ reactions to others’ facial expressions often represent a match or mirroring (e.g., they smile to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants’ reactions over the zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants’ facial responses to the android’s expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants’ responses to the game outcome were similar regardless of whether it was conveyed via the android’s smile or frown. Furthermore, the outcome had greater impact on people’s facial reactions when it was conveyed through the android’s face than via a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions. PMID:29740307

  17. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: Meta-analytic findings

    PubMed Central

    Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.

    2014-01-01

    Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469

  18. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    PubMed

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.

  19. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  20. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach draws on studies in psychology showing that most of the regions descriptive of, and responsible for, facial expression are located around a few face parts. The contribution of this work is a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step selects the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experiments show that using discriminative regions provides better results than using the whole face, whilst reducing the feature vector dimension.
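
    A minimal sketch of this pipeline under assumed parameters (the region grid, uniform-LBP settings, and the number of reported regions are illustrative choices, not the paper's values): LBP histograms are computed per region of a Sobel gradient image, and regions are ranked by the mutual information between their histogram features and the expression label:

```python
# Sketch of MI-based region selection over LBP-on-gradient features.
# Faces and labels are random stand-ins; all sizes are assumptions.
import numpy as np
from skimage.filters import sobel
from skimage.feature import local_binary_pattern
from sklearn.feature_selection import mutual_info_classif

P, R, BINS = 8, 1, 10        # uniform LBP with 8 neighbours -> 10 pattern bins
GRID = 4                     # split each face into a 4 x 4 grid of regions

def region_histograms(face):
    """LBP histogram of each grid region of the gradient (edge) image."""
    lbp = local_binary_pattern(sobel(face), P, R, method="uniform")
    h, w = lbp.shape
    feats = []
    for i in range(GRID):
        for j in range(GRID):
            patch = lbp[i*h//GRID:(i+1)*h//GRID, j*w//GRID:(j+1)*w//GRID]
            hist, _ = np.histogram(patch, bins=BINS, range=(0, BINS))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)             # GRID*GRID*BINS features per face

rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))             # stand-in face images
labels = rng.integers(0, 3, 100)              # stand-in expression labels

X = np.array([region_histograms(f) for f in faces])
mi = mutual_info_classif(X, labels).reshape(GRID * GRID, BINS).sum(axis=1)
print("most discriminative regions:", np.argsort(mi)[::-1][:5])
```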

  1. The effect of Ramadan fasting on spatial attention through emotional stimuli

    PubMed Central

    Molavi, Maziyar; Yunus, Jasmy; Utama, Nugraha P

    2016-01-01

    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on the processing of emotion through gazed facial expressions, a realistic multisource of social information, was investigated for the first time. A dynamic cue-target task was administered to 40 participants with behavioral and event-related potential measurements to reveal temporal and spatial brain activity before, during, and after the fasting period. Fasting had several significant effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting, and reaction times also decreased during the fasting period. Self-ratings of arousal deficit, as well as mood, increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for happy facial expression stimuli. A significant effect of gazed expression, and of its interaction with the emotional stimuli, was indicated by the amplitude of N1. Furthermore, the findings confirmed the validity effect, i.e., congruency between gaze and target position: P3 amplitude over the centroparietal area increased, and behavioral reaction times were slower, in the incongruent (invalid) condition between gaze and target position compared with the valid condition. These results show that attention to facial expression stimuli, a communicative social signal, is affected by fasting, and that fasting improved the mood of practitioners. Moreover, the behavioral and event-related potential analyses indicated that the neural dynamics of facial emotion are processed faster than those of gaze: participants tended to react faster and preferred to rely on the type of facial emotion rather than on gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation exceeded left-hemisphere activation, consistent with the emotional lateralization account rather than the valence account of emotional processing. PMID:27307772

  2. Brain response to masked and unmasked facial emotions as a function of implicit and explicit personality self-concept of extraversion.

    PubMed

    Suslow, Thomas; Kugel, Harald; Lindner, Christian; Dannlowski, Udo; Egloff, Boris

    2017-01-06

    Extraversion-introversion is a personality dimension referring to individual differences in social behavior. In the past, neurobiological research on extraversion was almost entirely based upon questionnaires which inform about the explicit self-concept. Today, indirect measures are available that tap into the implicit self-concept of extraversion, which is assumed to result from automatic processing functions. In our study, brain activation while viewing facial expression of affiliation relevant (i.e., happiness, and disgust) and irrelevant (i.e., fear) emotions was examined as a function of the implicit and explicit self-concept of extraversion and processing mode (automatic vs. controlled). 40 healthy volunteers watched blocks of masked and unmasked emotional faces while undergoing functional magnetic resonance imaging. The Implicit Association Test and the NEO Five-Factor Inventory were applied as implicit and explicit measures of extraversion, which were uncorrelated in our sample. Implicit extraversion was found to be positively associated with neural response to masked happy faces in the thalamus and temporo-parietal regions and to masked disgust faces in cerebellar areas. Moreover, it was positively correlated with brain response to unmasked disgust faces in the amygdala and cortical areas. Explicit extraversion was not related to brain response to facial emotions when controlling for trait anxiety. The implicit compared to the explicit self-concept of extraversion seems to be more strongly associated with brain activation not only during automatic but also during controlled processing of affiliation relevant facial emotions. Enhanced neural response to facial disgust could reflect high sensitivity to signals of interpersonal rejection in extraverts (i.e., individuals with affiliative tendencies). Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex

    PubMed Central

    Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.

    2015-01-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which were sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688

  4. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    PubMed Central

    Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T.; Alcázar-Ramírez, José D.; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A.

    2015-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI. PMID:26664493

  5. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment.

    PubMed

    Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T; Alcázar-Ramírez, José D; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A

    2015-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI.
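
    The final regression step shared by the two records above lends itself to a short sketch. AAM landmark fitting and i-vector extraction are substantial systems in their own right and are replaced below by random stand-in features; the feature dimensions and SVR settings are assumptions, not the authors' configuration:

```python
# Hedged sketch: support vector regression of the AHI on fused facial and
# speech features. All features here are random stand-ins for the outputs of
# AAM-based craniofacial measurement and i-vector extraction.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 285                              # cohort size from the abstract

face_feats = rng.random((n_subjects, 12))     # assumed: 12 craniofacial measures
ivectors = rng.random((n_subjects, 400))      # assumed: 400-dim speech i-vectors
ahi = rng.random(n_subjects) * 60             # apnea-hypopnea index (events/hour)

X = np.hstack([face_feats, ivectors])         # early fusion of image + speech
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
print(cross_val_score(model, X, ahi, cv=5, scoring="neg_mean_absolute_error"))
```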

  6. Aggression differentially modulates brain responses to fearful and angry faces: an exploratory study.

    PubMed

    Lu, Hui; Wang, Yu; Xu, Shuang; Wang, Yifeng; Zhang, Ruiping; Li, Tsingan

    2015-08-19

    Aggression is reported to modulate neural responses to threatening information. However, whether aggression can modulate neural responses to different kinds of threatening facial expressions (angry and fearful expressions) remains unknown. Thus, event-related potentials were measured in individuals (13 high-aggression, 12 low-aggression) exposed to neutral, angry, and fearful facial expressions while performing a frame-distinguishing task, irrespective of the emotional valence of the expressions. Highly aggressive participants showed no distinct neural responses between the three facial expressions. In addition, compared with individuals with low aggression, highly aggressive individuals showed a decreased frontocentral response to fearful faces within 250-300 ms and to angry faces within 400-500 ms of exposure. These results indicate that fearful faces represent a more threatening signal requiring a quick cognitive response during the early stage of facial processing, whereas angry faces elicit a stronger response during the later processing stage because of their salient emotional significance. The present results represent the first known evidence that aggression is associated with different neural responses to fearful and angry faces. By exploring the distinct temporal responses to fearful and angry faces modulated by aggression, this study more precisely characterizes the cognitive characteristics of aggressive individuals. Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.

  7. Developmental trends in the process of constructing own- and other-race facial composites.

    PubMed

    Kehn, Andre; Renken, Maggie D; Gray, Jennifer M; Nunez, Narina L

    2014-01-01

    The current study examined developmental differences from the age of 5 to 18 in the creation process of own- and other-race facial composites. In addition, it considered how differences in the creation process affect similarity ratings. Participants created two composites (one own- and one other-race) from memory. The complexity of the composite creation process was recorded during Phase One. In Phase Two, a separate group of participants rated the composites for similarity to the corresponding target face. Results support the cross-race effect, developmental differences (based on composite creators) in similarity ratings, and the importance of the creation process for own- and other-race facial composites. Together, these findings suggest that as children get older the process through which they create facial composites becomes more complex and their ability to create facial composites improves. Increased complexity resulted in higher rated composites. Results are discussed from a psycho-legal perspective.

  8. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    PubMed Central

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454

  9. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention

    PubMed Central

    Graham, Reiko; LaBar, Kevin S.

    2012-01-01

    The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic versus static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. PMID:22285906

  10. Facial identification in very low-resolution images simulating prosthetic vision.

    PubMed

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image to produce a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge image was weighted by 50% (mode 2), 75% (mode 3), or 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of identification index, which covers both accuracy and correct response time. We also found that subjects recognized a distinctive face more accurately and faster than the other facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appears to be a useful contribution to intermediate-stage visual prostheses.
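
    The enhancement procedure is described concretely enough to sketch. The following minimal illustration uses assumed parameters (input size, phosphene-array resolution, and histogram equalization as the contrast-enhancement operator are stand-ins; only the mode weights follow the record):

```python
# Sketch of the edge-enhancement pipeline: contrast-enhance the face, compute
# a Sobel edge image, block-average both to the phosphene-array resolution,
# then subtract a weighted edge image (0%/50%/75%/100% for modes 1-4).
import numpy as np
from skimage.exposure import equalize_hist
from skimage.filters import sobel

def block_average(img, n):
    """Downsample img to an n x n grid by averaging blocks ('pixelizing')."""
    h, w = img.shape
    return img[:h - h % n, :w - w % n].reshape(n, h // n, n, w // n).mean(axis=(1, 3))

def pixelize(face, n=32, mode=3):
    weight = {1: 0.0, 2: 0.5, 3: 0.75, 4: 1.0}[mode]
    contrast = equalize_hist(face)                      # contrast-enhanced image
    edges = sobel(face)                                 # Sobel edge image
    if edges.max() > 0:
        edges = edges / edges.max()
    out = block_average(contrast, n) - weight * block_average(edges, n)
    return np.clip(out, 0.0, 1.0)                       # simulated phosphene array

rng = np.random.default_rng(0)
face = rng.random((128, 128))      # stand-in for a grayscale face photograph
print(pixelize(face, n=32, mode=3).shape)               # (32, 32)
```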

  11. Contemporary solutions for the treatment of facial nerve paralysis.

    PubMed

    Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R

    2015-06-01

    After reviewing this article, the participant should be able to: 1. Understand the most modern indications and techniques for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for a two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.

  12. Adaptation to Emotional Conflict: Evidence from a Novel Face Emotion Paradigm

    PubMed Central

    Clayson, Peter E.; Larson, Michael J.

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words. PMID:24073278

  13. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex

    PubMed Central

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-01-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2 × 2 × 2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, selective neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded selectively to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions. PMID:23142071

  14. Adaptation to emotional conflict: evidence from a novel face emotion paradigm.

    PubMed

    Clayson, Peter E; Larson, Michael J

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words.

  15. Enhanced subliminal emotional responses to dynamic facial expressions.

    PubMed

    Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi

    2014-01-01

    Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  16. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    PubMed

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0.001; left/right judgment task P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r² = 0.523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  17. NK1 receptor antagonism and emotional processing in healthy volunteers.

    PubMed

    Chandra, P; Hafizi, S; Massey-Chase, R M; Goodwin, G M; Cowen, P J; Harmer, C J

    2010-04-01

    The neurokinin-1 (NK1) receptor antagonist, aprepitant, showed activity in several animal models of depression; however, its efficacy in clinical trials was disappointing. There is little knowledge of the role of NK1 receptors in human emotional behaviour to help explain this discrepancy. The aim of the current study was to assess the effects of a single oral dose of aprepitant (125 mg) on models of emotional processing sensitive to conventional antidepressant drug administration in 38 healthy volunteers, randomly allocated to receive aprepitant or placebo in a between groups double blind design. Performance on measures of facial expression recognition, emotional categorisation, memory and attentional visual-probe was assessed following the drug absorption. Relative to placebo, aprepitant improved recognition of happy facial expressions and increased vigilance to emotional information in the unmasked condition of the visual probe task. In contrast, aprepitant impaired emotional memory and slowed responses in the facial expression recognition task suggesting possible deleterious effects on cognition. These results suggest that while antagonism of NK1 receptors does affect emotional processing in humans, its effects are more restricted and less consistent across tasks than those of conventional antidepressants. Human models of emotional processing may provide a useful means of assessing the likely therapeutic potential of new treatments for depression.

  18. Emotion identification and aging: Behavioral and neural age-related changes.

    PubMed

    Gonçalves, Ana R; Fernandes, Carina; Pasion, Rita; Ferreira-Santos, Fernando; Barbosa, Fernando; Marques-Teixeira, João

    2018-05-01

    Aging is known to alter the processing of facial expressions of emotion (FEE); however, the impact of this alteration is less clear. Additionally, there is little information about the temporal dynamics of the neural processing of facial affect. We examined behavioral and neural age-related changes in the identification of FEE using event-related potentials, and we analyzed the relationship between behavioral/neural responses and neuropsychological functioning. To this end, 30 younger adults, 29 middle-aged adults, and 26 older adults identified FEE. The behavioral results showed a similar performance between groups. The neural results showed no significant differences between groups for the P100 component and an increased N170 amplitude in the older group. Furthermore, a pattern of asymmetric activation was evident in the N170 component. Results also suggest deficits in facial feature decoding abilities, reflected by a reduced N250 amplitude in older adults. Neuropsychological functioning predicted P100 modulation but did not seem to influence emotion identification ability. The findings suggest the existence of a compensatory function that would explain the age-equivalent performance in emotion identification. The study may help future research addressing the behavioral and neural processes involved in the processing of FEE in neurodegenerative conditions. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  19. Facial Attractiveness as a Moderator of the Association between Social and Physical Aggression and Popularity in Adolescents

    ERIC Educational Resources Information Center

    Rosen, Lisa H.; Underwood, Marion K.

    2010-01-01

    This study examined the relations between facial attractiveness, aggression, and popularity in adolescence to determine whether facial attractiveness would buffer against the negative effects of aggression on popularity. We collected ratings of facial attractiveness from standardized photographs, and teachers provided information on adolescents'…

  20. A face for all seasons: Searching for context-specific leadership traits and discovering a general preference for perceived health

    PubMed Central

    Spisak, Brian R.; Blaker, Nancy M.; Lefevre, Carmen E.; Moore, Fhionna R.; Krebbers, Kleis F. B.

    2014-01-01

    Previous research indicates that followers tend to contingently match particular leader qualities to evolutionarily consistent situations requiring collective action (i.e., context-specific cognitive leadership prototypes) and that information processing undergoes categorization which ranks certain qualities as first-order context-general and others as second-order context-specific. To further investigate this contingent categorization phenomenon, we examined the “attractiveness halo”—a first-order facial cue which significantly biases leadership preferences. While controlling for facial attractiveness, we independently manipulated the underlying facial cues of health and intelligence and then primed participants with four distinct organizational dynamics requiring leadership (i.e., competition vs. cooperation between groups and exploratory change vs. stable exploitation). It was expected that the differing requirements of the four dynamics would contingently select for relatively healthier- or more intelligent-looking leaders. We found perceived facial intelligence to be a second-order context-specific trait—for instance, in times requiring a leader to address between-group cooperation—whereas perceived health is significantly preferred across all contexts (i.e., a first-order trait). The results also indicate that facial health positively affects perceived masculinity while facial intelligence negatively affects perceived masculinity, which may partially explain leader choice in some of the environmental contexts. The limitations and a number of implications regarding leadership biases are discussed. PMID:25414653

  1. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  2. Exploring the nature of facial affect processing deficits in schizophrenia.

    PubMed

    van 't Wout, Mascha; Aleman, André; Kessels, Roy P C; Cahn, Wiepke; de Haan, Edward H F; Kahn, René S

    2007-04-15

    Schizophrenia has been associated with deficits in facial affect processing, especially negative emotions. However, the exact nature of the deficit remains unclear. The aim of the present study was to investigate whether schizophrenia patients have problems in automatic allocation of attention as well as in controlled evaluation of facial affect. Thirty-seven patients with schizophrenia were compared with 41 control subjects on incidental facial affect processing (gender decision of faces with a fearful, angry, happy, disgusted, and neutral expression) and degraded facial affect labeling (labeling of fearful, angry, happy, and neutral faces). The groups were matched on estimates of verbal and performance intelligence (National Adult Reading Test; Raven's Matrices), general face recognition ability (Benton Face Recognition), and other demographic variables. The results showed that patients with schizophrenia as well as control subjects demonstrate the normal threat-related interference during incidental facial affect processing. Conversely, on controlled evaluation patients were specifically worse in the labeling of fearful faces. In particular, patients with high levels of negative symptoms may be characterized by deficits in labeling fear. We suggest that patients with schizophrenia show no evidence of deficits in the automatic allocation of attention resources to fearful (threat-indicating) faces, but have a deficit in the controlled processing of facial emotions that may be specific for fearful faces.

  3. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effect of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
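
    To make the matching stage concrete, here is a minimal Python sketch of the mutual-information comparison between a query face and pose-labelled reference images. The EMD decomposition and IMF selection described above are omitted, so this illustrates only the similarity measure, not the authors' full algorithm; all images and pose angles are random stand-ins.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            # Histogram-based mutual information between two grayscale images.
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a intensities
            py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b intensities
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        def estimate_pose(query, gallery):
            # gallery: (pose_angle, reference_image) pairs; the estimate is the
            # pose of the reference most similar to the query under MI.
            return max((mutual_information(query, ref), ang) for ang, ref in gallery)[1]

        rng = np.random.default_rng(0)
        gallery = [(ang, rng.random((64, 64))) for ang in (-30, 0, 30)]
        query = gallery[1][1] + 0.05 * rng.random((64, 64))  # noisy frontal "face"
        print(estimate_pose(query, gallery))                 # -> 0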

  4. In what sense 'familiar'? Examining experiential differences within pathologies of facial recognition.

    PubMed

    Young, Garry

    2009-09-01

    Explanations of Capgras delusion and prosopagnosia typically incorporate a dual-route approach to facial recognition in which a deficit in overt or covert processing in one condition is mirror-reversed in the other. Despite this double dissociation, experiences of either patient-group are often reported in the same way--as lacking a sense of familiarity toward familiar faces. In this paper, deficits in the facial processing of these patients are compared to other facial recognition pathologies, and their experiential characteristics mapped onto the dual-route model in order to provide a less ambiguous link between facial processing and experiential content. The paper concludes that the experiential states of Capgras delusion, prosopagnosia, and related facial pathologies are quite distinct, and that this descriptive distinctiveness finds explanatory equivalence at the level of anatomical and functional disruption within the face recognition system. The role of skin conductance response (SCR) as a measure of 'familiarity' is also clarified.

  5. Attention to emotion and non-Western faces: revisiting the facial feedback hypothesis.

    PubMed

    Dzokoto, Vivian; Wallace, David S; Peters, Laura; Bentsi-Enchill, Esi

    2014-01-01

    In a modified replication of Strack, Martin, and Stepper's demonstration of the Facial Feedback Hypothesis (1988), we investigated the effect of attention to emotion on the facial feedback process in a non-western cultural setting. Participants, recruited from two universities in Ghana, West Africa, gave self-reports of their perceived levels of attention to emotion, and then completed cartoon-rating tasks while randomly assigned to smiling, frowning, or neutral conditions. While participants with low Attention to Emotion scores displayed the usual facial feedback effect (rating cartoons as funnier when in the smiling compared to the frowning condition), the effect was not present in individuals with high Attention to Emotion. The findings indicate that (1) the facial feedback process can occur in contexts beyond those in which the phenomenon has previously been studied, and (2) aspects of emotion regulation, such as Attention to Emotion can interfere with the facial feedback process.

  6. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
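
    As a minimal sketch of the structural approach described above, the snippet below turns facial salient points into pairwise-distance features and fits a regularized linear model to trait ratings. The landmark coordinates, the ratings, and the choice of Ridge regression are illustrative assumptions, not the authors' actual data or learners.

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        def structural_features(landmarks):
            # Pairwise distances between salient points, normalised by the
            # distance between points 0 and 1 (assumed here to be the eyes).
            iod = np.linalg.norm(landmarks[0] - landmarks[1])
            return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                             for i, j in combinations(range(len(landmarks)), 2)]) / iod

        rng = np.random.default_rng(1)
        landmarks = rng.random((200, 10, 2))   # 200 hypothetical faces, 10 points each
        ratings = rng.random(200)              # stand-in for averaged trait judgments

        X = np.stack([structural_features(lm) for lm in landmarks])
        # Cross-validated R² of the structural model (near zero here because the
        # stand-in data are random; real landmark/rating pairs would do better).
        print(cross_val_score(Ridge(alpha=1.0), X, ratings, cv=5).mean())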

  7. My Father's Ears

    ERIC Educational Resources Information Center

    Jones, Bridget

    2005-01-01

    Each new baby is scrutinised and their facial features and other characteristics catalogued against known family members. This is essentially a social process: the new family member is recognised and accepted into the tribe. However, people's genetic inheritance--the information in their DNA--is also a serious matter with the potential to…

  8. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved

  9. Quantifying facial expression recognition across viewing conditions.

    PubMed

    Goren, Deborah; Wilson, Hugh R

    2006-04-01

    Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.

  10. Psychocentricity and participant profiles: implications for lexical processing among multilinguals

    PubMed Central

    Libben, Gary; Curtiss, Kaitlin; Weber, Silke

    2014-01-01

    Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
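
    The following sketch renders a minimal Facial Profile in the spirit described above, mapping reading, speaking, and listening abilities (scaled to [0, 1]) onto eye, mouth, and ear sizes. The geometry and scaling constants are invented for illustration and do not reproduce the authors' exact design.

        import matplotlib.pyplot as plt
        import matplotlib.patches as patches

        def facial_profile(ax, reading, speaking, listening):
            # Chernoff-style face: abilities in [0, 1] control feature sizes.
            ax.add_patch(patches.Circle((0, 0), 1.0, fill=False))        # head
            for x in (-0.4, 0.4):                                        # eyes
                ax.add_patch(patches.Circle((x, 0.3), 0.05 + 0.2 * reading, fill=False))
            for x in (-1.0, 1.0):                                        # ears
                ax.add_patch(patches.Circle((x, 0.0), 0.05 + 0.25 * listening, fill=False))
            ax.add_patch(patches.Ellipse((0, -0.45),                     # mouth
                                         0.2 + 0.8 * speaking, 0.2, fill=False))
            ax.set_xlim(-1.6, 1.6); ax.set_ylim(-1.6, 1.6)
            ax.set_aspect("equal"); ax.axis("off")

        fig, axes = plt.subplots(1, 2)
        facial_profile(axes[0], reading=0.9, speaking=0.3, listening=0.6)
        facial_profile(axes[1], reading=0.4, speaking=0.8, listening=0.9)
        plt.show()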

  11. Unfakeable facial configurations affect strategic choices in trust games with or without information about past behavior.

    PubMed

    Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick

    2012-01-01

    Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.
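
    For readers unfamiliar with the paradigm, the sketch below simulates the canonical trust-game payoff structure (an endowment is invested, the investment is multiplied, and the trustee returns a share). The endowment, multiplier, and investment distributions are made-up values chosen only to produce a premium of roughly the size reported in Study 1; they are not the authors' parameters or data.

        import numpy as np

        ENDOWMENT, MULTIPLIER = 10.0, 3.0  # assumed canonical trust-game values

        def round_payoffs(invested, return_fraction):
            # Investor sends `invested`; it is multiplied; trustee returns a share.
            pot = MULTIPLIER * invested
            returned = return_fraction * pot
            return ENDOWMENT - invested + returned, pot - returned

        print(round_payoffs(6.0, 0.5))  # (investor payoff, trustee payoff)

        rng = np.random.default_rng(2)
        inv_trustworthy = rng.normal(6.0, 1.5, 100).clip(0, ENDOWMENT)
        inv_untrustworthy = rng.normal(4.2, 1.5, 100).clip(0, ENDOWMENT)
        premium = inv_trustworthy.mean() / inv_untrustworthy.mean() - 1
        print(f"face trustworthiness premium: {premium:.0%}")  # roughly 40%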

  12. Unfakeable Facial Configurations Affect Strategic Choices in Trust Games with or without Information about Past Behavior

    PubMed Central

    Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y.; Chater, Nick

    2012-01-01

    Background Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Methodology/Principal Findings Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Conclusions/Significance Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available. PMID:22470553

  13. How face blurring affects body language processing of static gestures in women and men.

    PubMed

    Proverbio, A M; Ornaghi, L; Gabaro, V

    2018-05-14

    The role of facial coding in body language comprehension was investigated by ERP recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic), which could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men to congruent stimuli, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration did not affect accuracy in women, as reflected by omission percentages, nor did it reduce their cognitive potentials, suggesting a better comprehension of face-deprived pantomimes. The N170 response (modulated by congruity and face presence) peaked later in men than in women. The Late Positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information, or better at understanding body language in face-deprived social situations.

  14. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks.

    PubMed

    Meaux, Emilie; Vuilleumier, Patrik

    2016-11-01

    The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and angry expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and angry expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform and inferior occipital areas and the amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged the STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and the pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by reciprocal interactions and modulated by task demands. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Emotional voices in context: A neurobiological model of multimodal affective information processing

    NASA Astrophysics Data System (ADS)

    Brück, Carolin; Kreifelts, Benjamin; Wildgruber, Dirk

    2011-12-01

    Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as a valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allow us to outline the cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals both in isolation and in the context of accompanying (facial and verbal) emotional cues.

  16. Emotional voices in context: a neurobiological model of multimodal affective information processing.

    PubMed

    Brück, Carolin; Kreifelts, Benjamin; Wildgruber, Dirk

    2011-12-01

    Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as a valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allow us to outline the cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals both in isolation and in the context of accompanying (facial and verbal) emotional cues. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Neural mechanism for judging the appropriateness of facial affect.

    PubMed

    Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Ki, Seon Wan; Im, Dong-Mi; Lee, Soo Jung; Lee, Hong Shick

    2005-12-01

    Questions regarding the appropriateness of facial expressions in particular situations arise ubiquitously in everyday social interactions. To determine the appropriateness of facial affect, first of all, we should represent our own or the other's emotional state as induced by the social situation. Then, based on these representations, we should infer the possible affective response of the other person. In this study, we identified the brain mechanism mediating special types of social evaluative judgments of facial affect in which the internal reference is related to theory of mind (ToM) processing. Many previous ToM studies have used non-emotional stimuli, but, because so much valuable social information is conveyed through nonverbal emotional channels, this investigation used emotionally salient visual materials to tap ToM. Fourteen right-handed healthy subjects volunteered for our study. We used functional magnetic resonance imaging to examine brain activation during the judgmental task for the appropriateness of facial affects as opposed to gender matching tasks. We identified activation of a brain network, which includes both medial frontal cortices, the left temporal pole, the left inferior frontal gyrus, and the left thalamus, during the judgmental task for appropriateness of facial affect compared to the gender matching task. The results of this study suggest that the brain system involved in ToM plays a key role in judging the appropriateness of facial affect in an emotionally laden situation. In addition, our results support the view that common neural substrates are involved in performing diverse kinds of ToM tasks, irrespective of perceptual modalities and the emotional salience of test materials.

  18. CACNA1C risk variant affects facial emotion recognition in healthy individuals.

    PubMed

    Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian

    2015-11-27

    Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders, and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) affects emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm, in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carriers (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals, and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.

  19. The Joint Facial and Invasive Neck Trauma (J-FAINT) Project, Iraq and Afghanistan 2003-2011

    DTIC Science & Technology

    2013-01-01

    Original Research—Facial Plastic and Reconstructive Surgery: The Joint Facial and Invasive Neck Trauma (J-FAINT) Project, Iraq and Afghanistan 2003...number and type of facial and penetrating neck trauma injuries sustained in Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF). Study...queried for data from OIF and OEF from January 2003 to May 2011. Information on demographics; type and severity of facial, neck, and associated trauma

  20. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on McGillFaces [1], a large, real-world video database of 18,000 video frames, reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
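
    The full hierarchical spatio-temporal graphical model is considerably richer than anything that fits here; the sketch below only illustrates the basic ingredient of fusing per-frame attribute probabilities into one video-level decision, using quality-weighted log-odds averaging as a simplified stand-in. All probabilities and weights are hypothetical.

        import numpy as np

        def video_attribute_posterior(frame_probs, frame_weights=None):
            # Fuse per-frame P(attribute | frame) by weighted log-odds averaging;
            # weights can downweight blurred or occluded frames.
            p = np.clip(np.asarray(frame_probs, dtype=float), 1e-6, 1 - 1e-6)
            logits = np.log(p / (1 - p))
            w = np.ones_like(logits) if frame_weights is None else np.asarray(frame_weights)
            fused = np.average(logits, weights=w)
            return 1.0 / (1.0 + np.exp(-fused))

        frame_probs = [0.9, 0.8, 0.55, 0.95, 0.4]  # per-frame P(attribute present)
        frame_weights = [1.0, 1.0, 0.3, 1.0, 0.2]  # low weight = poor-quality frame
        print(video_attribute_posterior(frame_probs, frame_weights))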

  1. Stereotypes and prejudice affect the recognition of emotional body postures.

    PubMed

    Bijlstra, Gijsbert; Holland, Rob W; Dotsch, Ron; Wigboldus, Daniel H J

    2018-03-26

    Most research on emotion recognition focuses on facial expressions. However, people communicate emotional information through bodily cues as well. Prior research on facial expressions has demonstrated that emotion recognition is modulated by top-down processes. Here, we tested whether this top-down modulation generalizes to the recognition of emotions from body postures. We report three studies demonstrating that stereotypes and prejudice about men and women may affect how fast people classify various emotional body postures. Our results suggest that gender cues activate gender associations, which affect the recognition of emotions from body postures in a top-down fashion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex.

    PubMed

    Morin, Elyse L; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G; Bell, Andrew H

    2015-09-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which are sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  3. Attention to emotion modulates fMRI activity in human right superior temporal sulcus.

    PubMed

    Narumoto, J; Okada, T; Sadato, N; Fukui, K; Yonekura, Y

    2001-10-01

    A parallel neural network has been proposed for processing various types of information conveyed by faces, including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of faces on the neuronal activity of face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match the visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expressions by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.

  4. Do you see what I see? Sex differences in the discrimination of facial emotions during adolescence.

    PubMed

    Lee, Nikki C; Krabbendam, Lydia; White, Thomas P; Meeter, Martijn; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Büchel, Christian; Conrod, Patricia; Flor, Herta; Frouin, Vincent; Heinz, Andreas; Garavan, Hugh; Gowland, Penny; Ittermann, Bernd; Mann, Karl; Paillère Martinot, Marie-Laure; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Robbins, Trevor; Fauth-Bühler, Mira; Smolka, Michael N; Gallinat, Juergen; Schumann, Gunther; Shergill, Sukhi S

    2013-12-01

    During adolescence social relationships become increasingly important. Establishing and maintaining these relationships requires understanding of emotional stimuli, such as facial emotions. A failure to adequately interpret emotional facial expressions has previously been associated with various mental disorders that emerge during adolescence. The current study examined sex differences in emotional face processing during adolescence. Participants were adolescents (n = 1951) with a target age of 14, who completed a forced-choice emotion discrimination task. The stimuli used comprised morphed faces that contained a blend of two emotions in varying intensities (11 stimuli per set of emotions). Adolescent girls showed faster and more sensitive perception of facial emotions than boys. However, both adolescent boys and girls were most sensitive to variations in emotion intensity in faces combining happiness and sadness, and least sensitive to changes in faces comprising fear and anger. Furthermore, both sexes overidentified happiness and anger. However, the overidentification of happiness was stronger in boys. These findings were not influenced by individual differences in the level of pubertal maturation. These results indicate that male and female adolescents differ in their ability to identify emotions in morphed faces containing emotional blends. The findings provide information for clinical studies examining whether sex differences in emotional processing are related to sex differences in the prevalence of psychiatric disorders within this age group.
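
    A minimal illustration of constructing an emotion-blend continuum like the 11-step stimuli described above: real morphing software also warps facial geometry between the two expressions, whereas this stand-in blends pixel intensities only.

        import numpy as np

        def morph_continuum(face_a, face_b, n_steps=11):
            # Linear blends from 100% expression A to 100% expression B,
            # e.g. 11 stimuli spanning happiness to sadness for one identity.
            return [(1 - a) * face_a + a * face_b
                    for a in np.linspace(0.0, 1.0, n_steps)]

        rng = np.random.default_rng(5)
        happy, sad = rng.random((64, 64)), rng.random((64, 64))  # stand-in images
        stimuli = morph_continuum(happy, sad)
        print(len(stimuli))  # -> 11 blended stimuli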

  5. How do schizophrenia patients use visual information to decode facial emotion?

    PubMed

    Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F

    2011-09-01

    Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-square multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
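
    The sketch below generates a Bubbles-style stimulus at a single spatial scale: a face is revealed only through Gaussian apertures at random locations. The published technique samples information at 5 band-pass spatial-frequency scales and calibrates the number of bubbles online; the bubble count and width here are arbitrary.

        import numpy as np

        def bubbles_mask(shape, n_bubbles, sigma, rng):
            # Sum of Gaussian apertures at random locations, clipped to [0, 1].
            h, w = shape
            yy, xx = np.mgrid[0:h, 0:w]
            mask = np.zeros(shape)
            for cy, cx in zip(rng.integers(0, h, n_bubbles),
                              rng.integers(0, w, n_bubbles)):
                mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            return np.clip(mask, 0, 1)

        rng = np.random.default_rng(3)
        face = rng.random((128, 128))       # stand-in for a face image
        mask = bubbles_mask(face.shape, n_bubbles=15, sigma=8, rng=rng)
        stimulus = face * mask              # only the "bubbled" regions are visible
        # Online calibration would then adjust n_bubbles per subject until
        # discrimination accuracy converges on 75%.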

  6. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.
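
    In the spirit of the Bayesian analysis described above, the toy sketch below updates a posterior over the six classic emotions as facial movements arrive over time: an early movement supports only a coarse approach/avoidance split, while a later one singles out a specific emotion. All likelihood values are invented for illustration.

        import numpy as np

        EMOTIONS = ["happy", "surprise", "fear", "disgust", "anger", "sad"]

        def update_posterior(prior, likelihoods):
            # One Bayesian step: posterior ~ prior * P(facial movement | emotion).
            post = prior * likelihoods
            return post / post.sum()

        early = np.array([0.8, 0.8, 0.1, 0.1, 0.1, 0.1])  # approach vs. avoidance
        late = np.array([0.9, 0.2, 0.1, 0.1, 0.1, 0.1])   # diagnostic of "happy"

        posterior = np.full(len(EMOTIONS), 1 / len(EMOTIONS))
        for likelihood in (early, late):
            posterior = update_posterior(posterior, likelihood)
        print(dict(zip(EMOTIONS, posterior.round(3))))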

  7. Who do you trust? The impact of facial emotion and behaviour on decision making

    PubMed Central

    Campellone, Timothy R.; Kring, Ann M.

    2014-01-01

    During social interactions, we use available information to guide our decisions, including behaviour and emotional displays. In some situations, behaviour and emotional displays may be incongruent, complicating decision making. This study had two main aims: first, to investigate the independent contributions of behaviour and facial displays of emotion on decisions to trust, and, second, to examine what happens when the information being signalled by a facial display is incongruent with behaviour. Participants played a modified version of the Trust Game in which they learned simulated players’ behaviour with or without concurrent displays of facial emotion. Results indicated that displays of anger, but not happiness, influenced decisions to trust during initial encounters. Over the course of repeated interactions, however, emotional displays consistent with an established pattern of behaviour made independent contributions to decision making, strengthening decisions to trust. When facial display and behaviour were incongruent, participants used current behaviour to inform decision making. PMID:23017055

  8. 78 FR 73502 - Multistakeholder Process To Develop Consumer Data Privacy Code of Conduct Concerning Facial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    ... Process To Develop Consumer Data Privacy Code of Conduct Concerning Facial Recognition Technology AGENCY... technology. This Notice announces the meetings to be held in February, March, April, May, and June 2014. The... promote trust regarding facial recognition technology in the commercial context.\\4\\ NTIA encourages...

  9. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2012-01-01

    Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597

  11. The Perception of Dynamic and Static Facial Expressions of Happiness and Disgust Investigated by ERPs and fMRI Constrained Source Analysis

    PubMed Central

    Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten

    2013-01-01

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in complex neural networks, in particular because their changing features may drive sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators not identifiable by the use of fMRI alone. PMID:23818974

  12. Disconnection mechanism and regional cortical atrophy contribute to impaired processing of facial expressions and theory of mind in multiple sclerosis: a structural MRI study.

    PubMed

    Mike, Andrea; Strammer, Erzsebet; Aradi, Mihaly; Orsi, Gergely; Perlaki, Gabor; Hajnal, Andras; Sandor, Janos; Banati, Miklos; Illes, Eniko; Zaitsev, Alexander; Herold, Robert; Guttmann, Charles R G; Illes, Zsolt

    2013-01-01

    Successful socialization requires the ability to understand others' mental states. This ability, called mentalization (Theory of Mind), may become deficient and contribute to everyday life difficulties in multiple sclerosis. We aimed to explore the impact of brain pathology on mentalization performance in multiple sclerosis. Mentalization performance of 49 patients with multiple sclerosis was compared to that of 24 age- and gender-matched healthy controls. T1- and T2-weighted three-dimensional brain MRI images were acquired at 3 Tesla from patients with multiple sclerosis and 18 gender- and age-matched healthy controls. We assessed overall brain cortical thickness in patients with multiple sclerosis and the scanned healthy controls, and measured the total and regional T1 and T2 white matter lesion volumes in patients with multiple sclerosis. Performances in tests of recognition of mental states and emotions from facial expressions and eye gazes correlated with both total T1-lesion load and regional T1-lesion load of association fiber tracts interconnecting cortical regions related to visual and emotion processing (genu and splenium of corpus callosum, right inferior longitudinal fasciculus, right inferior fronto-occipital fasciculus, uncinate fasciculus). Both of these tests showed correlations with specific cortical areas involved in emotion recognition from facial expressions (right and left fusiform face area, frontal eye field), processing of emotions (right entorhinal cortex) and socially relevant information (left temporal pole). Thus, both a disconnection mechanism due to white matter lesions and cortical thinning of specific brain areas may result in cognitive deficits in multiple sclerosis, affecting emotion and mental state processing from facial expressions and contributing to the everyday and social life difficulties of these patients.

  13. Processing of Emotional Faces in Patients with Chronic Pain Disorder: An Eye-Tracking Study.

    PubMed

    Giel, Katrin Elisabeth; Paganini, Sarah; Schank, Irena; Enck, Paul; Zipfel, Stephan; Junne, Florian

    2018-01-01

    Problems in emotion processing potentially contribute to the development and maintenance of chronic pain. Theories focusing on attentional processing have suggested that dysfunctional attention deployment toward emotional information, i.e., attentional biases for negative emotions, might constitute one potential developmental and/or maintenance factor of chronic pain. We assessed self-reported alexithymia and attentional orienting to and maintenance on emotional stimuli using eye tracking in 17 patients with chronic pain disorder (CP) and two age- and sex-matched control groups: 17 healthy individuals (HC) and 17 individuals who were matched to CP according to depressive symptoms (DC). In a choice viewing paradigm, a dot indicated the position of the emotional picture in the next trial to allow for strategic attention deployment. Picture pairs consisted of a happy or sad facial expression and a neutral facial expression of the same individual. Participants were asked to explore picture pairs freely. The CP and DC groups reported higher alexithymia than the HC group. HC showed a previously reported emotionality bias by preferentially orienting to the emotional face and preferentially maintaining on the happy face. CP and DC participants showed no facilitated early attention to sad facial expressions, and DC participants showed no facilitated early attention to happy facial expressions, while CP and HC participants did. We found no group differences in attentional maintenance. Our findings are in line with the large clinical overlap between pain and depression. The blunted initial reaction to sadness could be interpreted as a failure of the attentional system to attend to evolutionarily salient emotional stimuli or as an attempt to suppress negative emotions. These difficulties in emotion processing might contribute to the etiology or maintenance of chronic pain and depression.

  14. Evaluation of appearance transfer and persistence in central face transplantation: a computer simulation analysis.

    PubMed

    Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin

    2010-05-01

    Partial facial allotransplantation is an emerging option in reconstruction of central facial defects, providing function and aesthetic appearance. Ethical debate partly stems from uncertainty surrounding identity aspects of the procedure. There is no objective evidence regarding the effect of donors' transplanted facial structures on appearance change of the recipients and its influence on facial recognition of donors and recipients. Full-face frontal view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80). Photographs were taken in front of a blue background, and with a neutral facial expression. Using image-editing software (Adobe Photoshop CS3), central facial transplantation was performed between participants. Twenty observers performed a familiar-face recognition task, identifying 40 post-transplant composite faces presented individually on the screen at a viewing distance of 60 cm, with an exposure time of 5 s. Each composite face comprised a familiar and an unfamiliar face to the observers. Trials were done with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from those of Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that the donor's appearance transfer to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01). This study may be helpful in the informed consent process of prospective recipients. It is beneficial for the education of donors' families and is expected to positively affect their decision to consent to facial tissue donation. Copyright (c) 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
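
    In outline, the reported comparison can be reproduced with a paired t-test over per-observer rates, as sketched below. The rates are simulated around the reported means (2.6% and 66%) and are not the study's data, and the paired design is an assumption based on the same twenty observers providing both measures.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        transfer_rates = np.clip(rng.normal(0.026, 0.02, 20), 0, 1)    # donor identified
        persistence_rates = np.clip(rng.normal(0.66, 0.10, 20), 0, 1)  # recipient identified

        t, p = stats.ttest_rel(transfer_rates, persistence_rates)
        print(f"t = {t:.2f}, p = {p:.2g}")  # expect p << 0.001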

  15. [A text-book case of tropical facial elephantiasis].

    PubMed

    Dilu, N-J; Sokolo, R

    2007-02-01

    Tropical facial elephantiasis is a nosological entity that can arise from various underlying causes: von Recklinghausen neurofibromatosis, lymphatic and cutaneodermal filariases, and deep mycoses. We report an exceptional case of tropical facial elephantiasis caused by both onchocerciasis and entomophthoromycosis (rhinophycomycosis). The patient's facial morphology was described as a 'hippopotamus face' or 'dog face'. Onchocerciasis and entomophthoromycosis are both known to cause facial elephantiasis; however, we were unable to find any report of their co-morbidity in the literature, nor any information on factors predictive of concomitant occurrence.

  16. Recent Advances in Face Lift to Achieve Facial Balance.

    PubMed

    Ilankovan, Velupillai

    2017-03-01

    Facial balance is achieved by correction of facial proportions and the facial contour. Ageing, in addition to other factors, affects this balance. We have strived to describe all the recent advances in restoring this balance. The anatomy of ageing, including the various changes in clinical features, is described. The procedures are explained on the basis of the upper, middle, and lower face. Different face-lift and neck-lift procedures with innovative techniques are demonstrated. The aim is to provide a balanced, unoperated-looking facial proportion with zero complications.

  17. Differences in holistic processing do not explain cultural differences in the recognition of facial expression.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-12-01

    The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly for anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing that holistic perception of the aligned composite images interfered with recognition of their component expressions. However, this interference from holistic processing was equivalent across own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

  18. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
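
    The core idea, scoring locations by resemblance to the feature while penalizing resemblance to its context, can be sketched as a difference of two template-matching maps. The sketch below uses plain normalized cross-correlation as the similarity measure; this is a simplification of the learned statistical models described above, and all images and templates are illustrative stand-ins.

```python
import cv2
import numpy as np

def feature_minus_context_map(image, feature_templ, context_templ):
    """Score each location by similarity to the feature template minus
    similarity to the context template, so the detector gravitates toward
    positions that look like the feature but unlike its surroundings."""
    sim_feat = cv2.matchTemplate(image, feature_templ, cv2.TM_CCOEFF_NORMED)
    sim_ctx = cv2.matchTemplate(image, context_templ, cv2.TM_CCOEFF_NORMED)
    # Crop to a common size; a real implementation would align template centers.
    h = min(sim_feat.shape[0], sim_ctx.shape[0])
    w = min(sim_feat.shape[1], sim_ctx.shape[1])
    return sim_feat[:h, :w] - sim_ctx[:h, :w]

# Hypothetical grayscale face image with an eye template and its context patch.
img = np.random.rand(128, 128).astype(np.float32)
eye = np.random.rand(16, 24).astype(np.float32)         # feature template
around_eye = np.random.rand(24, 32).astype(np.float32)  # context template

score = feature_minus_context_map(img, eye, around_eye)
y, x = np.unravel_index(np.argmax(score), score.shape)
print("best feature location:", (x, y))
```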

  19. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
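
    ART handles the stability-plasticity dilemma with a vigilance test: an input refines the best-matching stored prototype only if the match is good enough, and otherwise recruits a new category, leaving old memories intact. A toy ART-style learner in that spirit (not the paper's full model, and with cosine similarity standing in for ART's match function) might look like this:

```python
import numpy as np

class SimpleART:
    """Toy ART-style learner: prototypes are refined only by inputs that
    pass a vigilance check, so learning new patterns does not overwrite
    previously learned ones (stability) while new categories can still
    be recruited (plasticity)."""

    def __init__(self, vigilance=0.8, lr=0.3):
        self.vigilance = vigilance
        self.lr = lr
        self.prototypes = []

    def _match(self, x, w):
        # Cosine similarity as a stand-in for ART's match function.
        return float(x @ w / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-9))

    def learn(self, x):
        if self.prototypes:
            sims = [self._match(x, w) for w in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:   # resonance: refine the winner
                self.prototypes[best] += self.lr * (x - self.prototypes[best])
                return best
        self.prototypes.append(x.astype(float))  # mismatch: new category
        return len(self.prototypes) - 1

rng = np.random.default_rng(2)
net = SimpleART()
for _ in range(100):
    # Hypothetical intermediate-level visual feature vectors.
    net.learn(rng.random(32))
print("categories recruited:", len(net.prototypes))
```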

  20. Developmental Differences in Holistic Interference of Facial Part Recognition

    PubMed Central

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2013-01-01

    Research has shown that adults' recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether children aged 6 and 9–10 years would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, where the part was presented either in isolation or in a whole face. The results showed that while all groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, extending into older childhood and adulthood. PMID:24204847

  1. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
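
    Rank classes such as R1 or R50 simply ask whether the ground-truth identity appears within the top k entries of the ranked candidate list returned for each search. A minimal sketch of that bookkeeping, with hypothetical identifiers and list sizes:

```python
def rank_k_hits(candidate_lists, true_ids, k):
    """Count searches whose ground-truth identity appears among the top-k
    candidates. candidate_lists[i] is the ranked gallery for search i."""
    return sum(true_ids[i] in candidate_lists[i][:k]
               for i in range(len(true_ids)))

# Hypothetical example: 4 searches against small ranked galleries.
lists = [["id7", "id2", "id9"], ["id1", "id4", "id3"],
         ["id5", "id8", "id6"], ["id2", "id7", "id1"]]
truth = ["id2", "id9", "id5", "id1"]

for k in (1, 2, 3):   # analogous to R1, R10, R50 at larger scale
    print(f"R{k}: {rank_k_hits(lists, truth, k)}/{len(truth)}")
```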

  2. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    PubMed Central

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  3. The Facespan-the perceptual span for face recognition.

    PubMed

    Papinutto, Michael; Lao, Junpeng; Ramon, Meike; Caldara, Roberto; Miellet, Sébastien

    2017-05-01

    In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces, and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces (the Facespan) remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight aperture size. Analyses of Structural Similarity comparing the information available during spotlight and natural viewing conditions indicate that the Facespan (the minimum spatial extent of preserved facial information yielding performance comparable to natural viewing) encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations addressing if and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.
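
    Structural Similarity (SSIM) quantifies how much of the natural-viewing image information survives a given aperture. The sketch below, using scikit-image's implementation, masks a stand-in image with a circular spotlight and compares it against the full view; the aperture model and all parameters are illustrative, not the study's exact procedure.

```python
import numpy as np
from skimage.metrics import structural_similarity

def circular_aperture(image, center, radius):
    """Keep pixels inside a gaze-contingent circular window; mean gray elsewhere."""
    yy, xx = np.mgrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = np.full_like(image, image.mean())
    out[mask] = image[mask]
    return out

rng = np.random.default_rng(3)
face = rng.random((200, 200))            # stand-in for a face image
for radius in (20, 40, 80):
    spotlight = circular_aperture(face, center=(100, 100), radius=radius)
    ssim = structural_similarity(face, spotlight, data_range=1.0)
    print(f"radius {radius}px: SSIM vs. full view = {ssim:.3f}")
```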

  4. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    PubMed

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
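
    The operational measure here is simply the standard deviation of each participant's reaction times across trials, which can then be correlated with attractiveness judgments of that participant's face. A sketch on simulated data (the negative relationship is built in purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_people = 30

# Hypothetical per-person RT series (ms): RT variability = SD across trials.
rt_sd = np.array([rng.normal(500, sd, size=200).std()
                  for sd in rng.uniform(30, 120, size=n_people)])
# Hypothetical mean attractiveness ratings (1-7 scale) of each person's face.
attractiveness = 5.0 - 0.02 * rt_sd + rng.normal(0, 0.3, size=n_people)

# Lower RT variability predicting higher attractiveness implies negative r.
r, p = stats.pearsonr(rt_sd, attractiveness)
print(f"r = {r:.2f}, p = {p:.4g}")
```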

  5. Horizontal tuning for faces originates in high-level Fusiform Face Area.

    PubMed

    Goffaux, Valerie; Duecker, Felix; Hausfeld, Lars; Schiltz, Christine; Goebel, Rainer

    2016-01-29

    Recent work indicates that the specialization of face visual perception relies on privileged processing of horizontal angles of facial information. This suggests that stimulus properties assumed to be fully resolved in primary visual cortex (V1; e.g., orientation) in fact shape human vision up to high-level stages of processing. To address this hypothesis, the present fMRI study explored the orientation sensitivity of V1 and of high-level face-specialized ventral regions, the Occipital Face Area (OFA) and Fusiform Face Area (FFA), to different angles of face information. Participants viewed face images filtered to retain information at horizontal, vertical, or oblique angles. Filtered images were viewed upright, inverted, and (phase-)scrambled. The FFA responded most strongly to the horizontal range of upright face information; its activation pattern reliably separated horizontal from oblique ranges, but only when faces were upright. Moreover, activation patterns induced in the right FFA and the OFA by upright and inverted faces could only be separated based on horizontal information. This indicates that the specialized processing of upright face information in the OFA and FFA essentially relies on the encoding of horizontal facial cues. This pattern was not passively inherited from V1, which responded less strongly to horizontal than to other orientations, likely due to adaptive whitening. Moreover, we found that orientation decoding accuracy in V1 was impaired for stimuli containing no meaningful shape. By showing that primary coding in V1 is influenced by high-order stimulus structure and that high-level processing is tuned to selective ranges of primary information, the present work suggests that primary and high-level stages of the visual system interact to modulate the processing of certain ranges of primary information depending on their relevance to the stimulus and task at hand. Copyright © 2015 Elsevier Ltd. All rights reserved.
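
    Restricting a face image to its horizontal range of information amounts to an orientation bandpass in the Fourier domain: keep components whose spectral angle corresponds to horizontal image structure and discard the rest. A simplified sketch follows; the hard-edged filter and 15° half-width are illustrative choices, whereas published stimuli typically use smooth orientation filters.

```python
import numpy as np

def orientation_filter(image, center_deg=0.0, halfwidth_deg=15.0):
    """Keep spatial-frequency components within +/- halfwidth of an image
    orientation (0 deg = horizontal structure). Note: horizontal image
    structure lives on the *vertical* frequency axis, hence the 90-deg shift."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy, fx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    spectral_angle = np.degrees(np.arctan2(fy, fx))      # -180..180
    image_angle = (spectral_angle + 90.0) % 180.0        # map to image orientation
    target = center_deg % 180.0
    diff = np.minimum(np.abs(image_angle - target),
                      180.0 - np.abs(image_angle - target))
    mask = diff <= halfwidth_deg
    mask[h // 2, w // 2] = True                          # always keep DC (mean)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

face = np.random.rand(128, 128)      # stand-in for a face image
horizontal_only = orientation_filter(face, center_deg=0.0)
vertical_only = orientation_filter(face, center_deg=90.0)
```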

  6. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  7. From Facial Emotional Recognition Abilities to Emotional Attribution: A Study in Down Syndrome

    ERIC Educational Resources Information Center

    Hippolyte, Loyse; Barisnikov, Koviljka; Van der Linden, Martial; Detraux, Jean-Jacques

    2009-01-01

    Facial expression processing and the attribution of facial emotions to a context were investigated in adults with Down syndrome (DS) in two experiments. Their performances were compared with those of a child control group matched for receptive vocabulary. The ability to process faces without emotional content was controlled for, and no differences…

  8. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates

    PubMed Central

    Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui

    2014-01-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898

  9. Toward a Two-Dimensional Model of Social Cognition in Clinical Neuropsychology: A Systematic Review of Factor Structure Studies.

    PubMed

    Etchepare, Aurore; Prouteau, Antoinette

    2018-04-01

    Social cognition has received growing interest in many conditions in recent years. However, this construct still suffers from a considerable lack of consensus, especially regarding the dimensions to be studied and the resulting methodology of clinical assessment. Our review aims to clarify the distinctiveness of the dimensions of social cognition. Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements, a systematic review was conducted to explore the factor structure of social cognition in the adult general and clinical populations. The initial search provided 441 articles published between January 1982 and March 2017. Eleven studies were included, all conducted in psychiatric populations and/or healthy participants. Most studies were in favor of a two-factor solution. Four studies drew a distinction between low-level (e.g., facial emotion/prosody recognition) and high-level (e.g., theory of mind) information processing. Four others reported a distinction between affective (e.g., facial emotion/prosody recognition) and cognitive (e.g., false beliefs) information processing. Interestingly, attributional style was frequently reported as an additional separate factor of social cognition. Results of factor analyses add further support for the relevance of models differentiating level of information processing (low- vs. high-level) from nature of processed information (affective vs. cognitive). These results add to a significant body of empirical evidence from developmental, clinical research and neuroimaging studies. We argue the relevance of integrating low- versus high-level processing with affective and cognitive processing in a two-dimensional model of social cognition that would be useful for future research and clinical practice. (JINS, 2018, 24, 391-404).

  10. [Clinical experience in facial nerve tumors: a review of 27 cases].

    PubMed

    Zhang, Fan; Wang, Yucheng; Dai, Chunfu; Chi, Fanglu; Zhou, Liang; Chen, Bing; Li, Huawei

    2010-01-01

    To analyze the clinical manifestations and diagnosis of facial nerve tumors on the basis of clinical information, and to evaluate the different surgical approaches depending on tumor location. Twenty-seven cases of facial nerve tumors with complete clinical information, seen between September 1999 and December 2006 in the Shanghai EENT Hospital, were reviewed retrospectively. Twenty (74.1%) schwannomas, 4 (14.8%) neurofibromas, and 3 (11.1%) hemangiomas were identified histopathologically. During the course of the disease, 23 patients (85.2%) suffered facial paralysis, 11 (40.7%) had both hearing loss and tinnitus, 5 (18.5%) presented with an infra-auricular mass, and the others showed otalgia, vertigo, ear fullness, or facial numbness/twitches. CT and/or MRI in 24 cases indicated that the tumors originated from the facial nerve. Intra-operative findings showed that 24 (88.9%) cases involved two or more segments of the facial nerve; of these, 87.5% (21/24) involved the mastoid portion, 70.8% (17/24) the tympanic portion, 62.5% (15/24) the geniculate ganglion, and only 4.2% (1/24) the internal acoustic canal (IAC), while 3 cases (11.1%) had a single segment involved. In all 27 cases the tumors were completely excised; 13 resections were followed by immediate facial nerve reconstruction, comprising 11 sural nerve cable grafts, 1 facial nerve end-to-end anastomosis, and 1 hypoglossal-facial nerve end-to-end anastomosis. Tumors were removed with preservation of facial nerve continuity in 2 cases. Facial nerve tumors are rare, benign lesions with varied clinical manifestations. CT and MRI can help surgeons make the right diagnosis preoperatively. The timing and type of operation should be decided individually for each patient.

  11. The “Good Cop, Bad Cop” Effect in the RT-Based Concealed Information Test: Exploring the Effect of Emotional Expressions Displayed by a Virtual Investigator

    PubMed Central

    Varga, Mihai; Visu-Petra, George; Miclea, Mircea; Visu-Petra, Laura

    2015-01-01

    Concealing the possession of relevant information represents a complex cognitive process, shaped by contextual demands and individual differences in cognitive and socio-emotional functioning. The Reaction Time-based Concealed Information Test (RT-CIT) is used to detect concealed knowledge based on the difference in RTs between denying recognition of critical (probes) and newly encountered (irrelevant) information. Several research questions were addressed in this scenario implemented after a mock crime. First, we were interested whether the introduction of a social stimulus (facial identity) simulating a virtual investigator would facilitate the process of deception detection. Next, we explored whether his emotional displays (friendly, hostile or neutral) would have a differential impact on speed of responses to probe versus irrelevant items. We also compared the impact of introducing similar stimuli in a working memory (WM) updating context without requirements to conceal information. Finally, we explored the association between deceptive behavior and individual differences in WM updating proficiency or in internalizing problems (state / trait anxiety and depression). Results indicated that the mere presence of a neutral virtual investigator slowed down participants' responses, but not the appended lie-specific time (difference between probes and irrelevants). Emotional expression was shown to differentially affect speed of responses to critical items, with positive displays from the virtual examiner enhancing lie-specific time, compared to negative facial expressions, which had an opposite impact. This valence-specific effect was not visible in the WM updating context. Higher levels of trait / state anxiety were related to faster responses to probes in the negative condition (hostile facial expression) of the RT-CIT. These preliminary findings further emphasize the need to take into account motivational and emotional factors when considering the transfer of deception detection techniques from the laboratory to real-life settings. PMID:25699516

  12. [Negative symptoms, emotion and cognition in schizophrenia].

    PubMed

    Fakra, E; Belzeaux, R; Azorin, J-M; Adida, M

    2015-12-01

    For a long time, the treatment of schizophrenia has been focussed essentially on managing positive symptoms. Yet, even if these symptoms are the most noticeable, negative symptoms are more enduring, resistant to pharmacological treatment, and associated with a worse prognosis. In the last two decades, attention has shifted towards cognitive deficit, as this deficit is most robustly associated with functional outcome. But it appears that the modest improvement in cognition obtained in schizophrenia through pharmacological treatment or, more purposely, through cognitive enhancement therapy, has led to only limited amelioration of functional outcome. Authors have argued that pure cognitive processes, such as those evaluated and trained in many of these programs, may be too distant from real-life conditions, as the latter are largely based on social interactions. Consequently, the field of social cognition, at the interface of cognition and emotion, has emerged. In the first part of this article, we examine the links in schizophrenia between negative symptoms, cognition, and emotions from a therapeutic standpoint. Nonetheless, the investigation of emotion in schizophrenia may also hold relevant premises for understanding the pathophysiology of this disorder. In the second part, we illustrate this research by relying on the heuristic value of an elementary marker of social cognition, facial affect recognition. Facial affect recognition has repeatedly been reported to be impaired in schizophrenia, and some authors have argued that this deficit could constitute an endophenotype of the illness. We examine how facial affect processing has been used to explore broader emotion dysfunction in schizophrenia through behavioural and imaging studies. In particular, fMRI paradigms using facial affect have shown particular patterns of amygdala engagement in schizophrenia, suggesting an intact potential to engage the limbic system which may, however, not be advantageous. Finally, we analyse facial affect processing at a cognitive-perceptual level, and the aptitude in schizophrenia to manipulate featural and configural information in faces. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Large Intratemporal Facial Nerve Schwannoma without Facial Palsy: Surgical Strategy of Tumor Removal and Functional Reconstruction.

    PubMed

    Yetiser, Sertac

    2018-06-08

    Three patients with large intratemporal facial schwannomas underwent tumor removal and facial nerve reconstruction with hypoglossal anastomosis; the surgical strategy for each case was tailored to the location of the mass and its extension along the facial nerve. The aim was to provide data on the different clinical aspects of facial nerve schwannoma, appropriate planning for management, and predictive outcomes of facial function. The three patients (two men and one woman, aged 45, 36, and 52 years) presented to the clinic between 2009 and 2015; all had hearing loss but normal facial function. All were operated on with radical tumor removal via mastoidectomy and subtotal petrosectomy and simultaneous cranial nerve (CN) 7-CN 12 anastomosis. Multiple segments of the facial nerve were involved, with tumors ranging in size from 3 to 7 cm. Over a follow-up period of 9 to 24 months, there was no tumor recurrence. Facial function was scored House-Brackmann grade II or III, although two patients are still in the process of functional recovery. Conservative treatment with sparing of the nerve is considered in patients with small tumors; excision of a large facial schwannoma with immediate hypoglossal nerve grafting as a primary procedure can provide satisfactory facial nerve function. One disadvantage of performing anastomosis is that there is often not enough neural tissue just proximal to the bifurcation of the main stump to allow tension-free neural suturing, because middle fossa extension of the facial schwannoma frequently involves the main facial nerve at the stylomastoid foramen. Reanimation should therefore proceed with extensive backward mobilization of the hypoglossal nerve. Georg Thieme Verlag KG Stuttgart · New York.

  14. Serotonin and the neural processing of facial emotions in adults with autism: an fMRI study using acute tryptophan depletion.

    PubMed

    Daly, Eileen M; Deeley, Quinton; Ecker, Christine; Craig, Michael; Hallahan, Brian; Murphy, Clodagh; Johnston, Patrick; Spain, Debbie; Gillan, Nicola; Brammer, Michael; Giampietro, Vincent; Lamar, Melissa; Page, Lisa; Toal, Fiona; Cleare, Anthony; Surguladze, Simon; Murphy, Declan G M

    2012-10-01

    People with autism spectrum disorders (ASDs) have lifelong deficits in social behavior and differences in behavioral as well as neural responses to facial expressions of emotion. The biological basis of this is incompletely understood, but it may include differences in the role of neurotransmitters such as serotonin, which modulate facial emotion processing in health. While some individuals with ASD have significant differences in the serotonin system, to our knowledge no one has investigated its role during facial emotion processing in adults with ASD and control subjects using acute tryptophan depletion (ATD) and functional magnetic resonance imaging (fMRI). The objective was to compare the effects of ATD on brain responses to primary facial expressions of emotion in men with ASD and healthy control subjects. In a double-blind, placebo-controlled, crossover trial conducted at the Institute of Psychiatry, King's College London, and South London and Maudsley National Health Service Foundation Trust, England, ATD and fMRI were used to measure brain activity during incidental processing of disgusted, fearful, happy, and sad facial expressions. Participants were 14 men of normal intelligence with autism and 14 control subjects who did not significantly differ in sex, age, or overall intelligence; the main outcome measure was the blood oxygenation level-dependent response to facial expressions of emotion. Brain activation was differentially modulated by ATD depending on diagnostic group and emotion type within regions of the social brain network. For example, processing of disgust faces was associated with interactions in medial frontal and lingual gyri, whereas processing of happy faces was associated with interactions in middle frontal gyrus and putamen. Modulation of the processing of facial expressions of emotion by serotonin thus differs significantly in people with ASD compared with control subjects; the differences vary with emotion type and occur in social brain regions that have been shown to be associated with group differences in serotonin synthesis/receptor or transporter density.

  15. I feel your voice. Cultural differences in the multisensory perception of emotion.

    PubMed

    Tanaka, Akihiro; Koizumi, Ai; Imai, Hisato; Hiramatsu, Saori; Hiramoto, Eriko; de Gelder, Beatrice

    2010-09-01

    Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.

  16. NATIONAL PREPAREDNESS: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy

    DTIC Science & Technology

    2002-06-07

    Continue to Develop and Refine Emerging Technology • Some of the emerging biometric devices, such as iris scans, facial recognition systems, and speaker verification systems. (976301)

  17. Using Event Related Potentials to Explore Stages of Facial Affect Recognition Deficits in Schizophrenia

    PubMed Central

    Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.

    2008-01-01

    Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or if a building had 1 or 2 stories. Three ERPs were examined: (1) P100 to examine basic visual processing, (2) N170 to examine facial feature encoding, and (3) N250 to examine affect decoding. Behavioral performance on each task was also measured. Results showed that schizophrenia patients’ P100 was comparable to the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest for the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance in any of the 3 identification tasks. The pattern of intact P100 and N170 suggest that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704

  18. Similarities and differences in Chinese and Caucasian adults' use of facial cues for trustworthiness judgments.

    PubMed

    Xu, Fen; Wu, Dingcheng; Toriyama, Rie; Ma, Fengling; Itakura, Shoji; Lee, Kang

    2012-01-01

    All cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments. In the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a "shortcut" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness. The results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a "shortcut" for trustworthiness judgments.

  19. Perceiving the evil eye: Investigating hostile interpretation of ambiguous facial emotional expression in violent and non-violent offenders.

    PubMed

    Kuin, Niki C; Masthoff, Erik D M; Munafò, Marcus R; Penton-Voak, Ian S

    2017-01-01

    Research into the causal and perpetuating factors influencing aggression has partly focused on the general tendency of aggression-prone individuals to infer hostile intent in others, even in ambiguous circumstances. This is referred to as the 'hostile interpretation bias'. Whether this hostile interpretation bias also exists in basal information processing, such as perception of facial emotion, is not yet known, especially with respect to the perception of ambiguous expressions. In addition, little is known about how this potential bias in facial emotion perception is related to specific characteristics of aggression. In the present study, conducted in a penitentiary setting with detained male adults, we investigated if violent offenders (n = 71) show a stronger tendency to interpret ambiguous facial expressions on a computer task as angry rather than happy, compared to non-violent offenders (n = 14) and to a control group of healthy volunteers (n = 32). We also investigated if hostile perception of facial expressions is related to specific characteristics of aggression, such as proactive and reactive aggression. No clear statistical evidence was found that violent offenders perceived facial emotional expressions as more angry than non-violent offenders or healthy volunteers. A regression analysis in the violent offender group showed that only age and a self-report measure of hostility predicted outcome on the emotion perception task. Other traits, such as psychopathic traits, intelligence, attention and a tendency to jump to conclusions were not associated with interpretation of anger in facial emotional expressions. We discuss the possible impact of the study design and population studied on our results, as well as implications for future studies.

  1. Facial Emotion Identification and Sexual Assault Risk Detection among College Student Sexual Assault Victims and Nonvictims

    ERIC Educational Resources Information Center

    Melkonian, Alexander J.; Ham, Lindsay S.; Bridges, Ana J.; Fugitt, Jessica L.

    2017-01-01

    Objective: High rates of sexual victimization among college students necessitate further study of factors associated with sexual assault risk detection. The present study examined how social information processing relates to sexual assault risk detection as a function of sexual assault victimization history. Participants: 225 undergraduates…

  2. Early Family System Types Predict Children's Emotional Attention Biases at School Age

    ERIC Educational Resources Information Center

    Lindblom, Jallu; Peltola, Mikko J.; Vänskä, Mervi; Hietanen, Jari K.; Laakso, Anu; Tiitinen, Aila; Tulppala, Maija; Punamäki, Raija-Leena

    2017-01-01

    The family environment shapes children's social information processing and emotion regulation. Yet, the long-term effects of early family systems have rarely been studied. This study investigated how family system types predict children's attentional biases toward facial expressions at the age of 10 years. The participants were 79 children from…

  3. Modeling Face Identification Processing in Children and Adults.

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Massaro, Dominic W.

    2001-01-01

    Two experiments studied whether and how 5-year-olds integrate single facial features to identify faces. Results indicated that children could evaluate and integrate information from eye and mouth features to identify a face when salience of features was varied. A weighted Fuzzy Logical Model of Perception fit better than a Single Channel Model,…

  4. The aging African-American face.

    PubMed

    Brissett, Anthony E; Naylor, Michelle C

    2010-05-01

    With the desire to create a more youthful appearance, patients of all races and ethnicities are increasingly seeking nonsurgical and surgical rejuvenation. In particular, facial rejuvenation procedures have grown significantly within the African-American population. This increase has resulted in a paradigm shift in facial plastic surgery as one considers rejuvenation procedures in those of African descent, as the aging process of various racial groups differs from traditional models. The purpose of this article is to draw attention to the facial features unique to those of African descent and the role these features play in the aging process, taking care to highlight the differences from traditional models of facial aging. In addition, this article will briefly describe the nonsurgical and surgical options for facial rejuvenation taking into consideration the previously discussed facial aging differences and postoperative considerations. Thieme Medical Publishers.

  5. Decoding facial blends of emotion: visual field, attentional and hemispheric biases.

    PubMed

    Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I

    2013-12-01

    Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.

  6. Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).

    PubMed

    Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M

    2017-02-01

    Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
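
    The pipeline described (relighting, then illumination estimation, then histogram stretching) follows the usual reflectance-illumination decomposition: estimate the slowly varying illumination field, divide it out, and stretch the remaining texture contrast. The sketch below is a generic version of that idea using a Gaussian-blur illumination estimate and global percentile stretching; it is not the authors' adaptive or anisotropic algorithms.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(face, sigma=15.0, eps=1e-6):
    """Estimate slowly varying illumination with a large Gaussian blur,
    divide it out to recover texture, then stretch texture to [0, 1]."""
    face = face.astype(float)
    illumination = gaussian_filter(face, sigma=sigma) + eps
    texture = face / illumination            # reflectance-like component
    lo, hi = np.percentile(texture, (1, 99))
    return np.clip((texture - lo) / (hi - lo + eps), 0.0, 1.0)

# Hypothetical unevenly lit face: texture plus a left-to-right light gradient.
rng = np.random.default_rng(5)
tex = rng.random((96, 96))
gradient = np.linspace(0.3, 1.0, 96)[None, :]
normalized = normalize_illumination(tex * gradient)
```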

  8. Communicating with Virtual Humans.

    ERIC Educational Resources Information Center

    Thalmann, Nadia Magnenat

    The face is a small part of a human, but it plays an essential role in communication. An open hybrid system for facial animation is presented. It encapsulates a considerable amount of information regarding facial models, movements, expressions, emotions, and speech. The complex description of facial animation can be handled better by assigning…

  9. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  10. Adult Perceptions of Positive and Negative Infant Emotional Expressions

    ERIC Educational Resources Information Center

    Bolzani Dinehart, Laura H.; Messinger, Daniel S.; Acosta, Susan I.; Cassel, Tricia; Ambadar, Zara; Cohn, Jeffrey

    2005-01-01

    Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and…

  11. Effects of dynamic information in recognising facial expressions on dimensional and categorical judgments.

    PubMed

    Fujimura, Tomomi; Suzuki, Naoto

    2010-01-01

    We investigated the effects of dynamic information on decoding facial expressions. A dynamic face entailed a change from a neutral to a full-blown expression, whereas a static face included only the full-blown expression. Sixty-eight participants were divided into two groups, the dynamic condition and the static condition. The facial stimuli expressed eight kinds of emotions (excited, happy, calm, sleepy, sad, angry, fearful, and surprised) according to a dimensional perspective. Participants evaluated each facial stimulus using two methods, the Affect Grid (Russell et al, 1989, Personality and Social Psychology, 29, 497-510) and a forced-choice task, allowing for dimensional and categorical judgment interpretations. For activation ratings in dimensional judgments, the results indicated that dynamic calm faces (low-activation expressions) were rated as less activated than static ones. For categorical judgments, dynamic excited, happy, and fearful faces, which are high- and middle-activation expressions, received higher ratings than those in the static condition. These results suggest that the beneficial effect of dynamic information depends on the emotional properties of facial expressions.

  12. Orientations for the successful categorization of facial expressions and their link with facial features.

    PubMed

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for all expressions except surprise. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  13. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
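
    VLBP and LBP-TOP both build on the basic spatial LBP operator, which thresholds each pixel's neighbors against the center and packs the results into an 8-bit code whose histogram describes the texture. A minimal 8-neighbor spatial LBP is sketched below for orientation; LBP-TOP additionally applies the same operator on the XT and YT planes of an image sequence and concatenates the three histograms.

```python
import numpy as np

def lbp_8neighbor(image):
    """Basic 3x3 local binary pattern: each of the 8 neighbors contributes
    one bit (1 if neighbor >= center), yielding a code in [0, 255]."""
    # Clockwise 3x3 neighborhood offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = image[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = image[1 + dy:image.shape[0] - 1 + dy,
                         1 + dx:image.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

frame = np.random.rand(64, 64)                    # stand-in for one video frame
hist = np.bincount(lbp_8neighbor(frame).ravel(), minlength=256)  # texture histogram
```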

  14. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed biases in selective attention to facial emotions in the negative symptoms of schizophrenia, and their influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high or low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation, and with more recognition errors for happy faces, in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms underlying negative symptoms.
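
    In a visual probe task, the attention bias score is typically the mean RT when the probe replaces the neutral face minus the mean RT when it replaces the emotional face, so positive values indicate attention deployed toward the emotion. A sketch with hypothetical single-participant data (the exact scoring formula used in the study may differ):

```python
import numpy as np

def attention_bias(rt_probe_at_neutral, rt_probe_at_emotional):
    """Positive score = faster when the probe replaces the emotional face,
    i.e., attention was deployed toward the emotional expression."""
    return float(np.mean(rt_probe_at_neutral) - np.mean(rt_probe_at_emotional))

rng = np.random.default_rng(6)
# Hypothetical RTs (ms) for one participant at the 500 ms exposure.
rt_neutral_side = rng.normal(520, 60, size=48)    # probe behind neutral face
rt_emotional_side = rng.normal(495, 60, size=48)  # probe behind emotional face
print(f"bias = {attention_bias(rt_neutral_side, rt_emotional_side):.1f} ms")
```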

  15. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighbor maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. The method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
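
    The gradient-projection step admits a compact illustration: facial features produce peaks in the row and column sums of the gradient magnitude, so the strongest rows mark candidate eye, nose, and mouth bands. This is a minimal sketch of the general idea, not the paper's implementation.

        import numpy as np

        def gradient_projections(gray):
            """Return vertical (per-row) and horizontal (per-column) gradient sums."""
            gy, gx = np.gradient(gray.astype(float))
            magnitude = np.hypot(gx, gy)
            return magnitude.sum(axis=1), magnitude.sum(axis=0)

        def feature_rows(gray, n=3):
            """Rows with the strongest gradient response: candidate feature bands."""
            row_proj, _ = gradient_projections(gray)
            return np.sort(np.argsort(row_proj)[-n:])

        face = np.random.rand(64, 64)   # stand-in for a detected face region
        print(feature_rows(face))       # row indices of the strongest bands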

  16. Caring or daring? Exploring the impact of facial masculinity/femininity and gender category information on first impressions.

    PubMed

    Walker, Mirella; Wänke, Michaela

    2017-01-01

    In two studies we disentangled and systematically investigated the impact of subtle facial cues to masculinity/femininity and gender category information on first impressions. Participants judged the same unambiguously male and female target persons, either with masculine or feminine facial features slightly enhanced, regarding stereotypically masculine (i.e., competence) and feminine (i.e., warmth) personality traits. Results of both studies showed a strong effect of facial masculinity/femininity: Masculine-looking persons were seen as colder and more competent than feminine-looking persons. This effect of facial masculinity/femininity was not only found for typical (i.e., masculine-looking men and feminine-looking women) and atypical (i.e., masculine-looking women and feminine-looking men) category members; it was even found to be more pronounced for atypical than for typical category members. This finding reveals that comparing atypical members to the group prototype results in pronounced effects of facial masculinity/femininity. These contrast effects for atypical members predominate over assimilation effects for typical members. Intriguingly, very subtle facial cues to masculinity/femininity strongly guide first impressions and may have more impact than the gender category.

  17. Caring or daring? Exploring the impact of facial masculinity/femininity and gender category information on first impressions

    PubMed Central

    Walker, Mirella; Wänke, Michaela

    2017-01-01

    In two studies we disentangled and systematically investigated the impact of subtle facial cues to masculinity/femininity and gender category information on first impressions. Participants judged the same unambiguously male and female target persons, either with masculine or feminine facial features slightly enhanced, regarding stereotypically masculine (i.e., competence) and feminine (i.e., warmth) personality traits. Results of both studies showed a strong effect of facial masculinity/femininity: Masculine-looking persons were seen as colder and more competent than feminine-looking persons. This effect of facial masculinity/femininity was not only found for typical (i.e., masculine-looking men and feminine-looking women) and atypical (i.e., masculine-looking women and feminine-looking men) category members; it was even found to be more pronounced for atypical than for typical category members. This finding reveals that comparing atypical members to the group prototype results in pronounced effects of facial masculinity/femininity. These contrast effects for atypical members predominate over assimilation effects for typical members. Intriguingly, very subtle facial cues to masculinity/femininity strongly guide first impressions and may have more impact than the gender category. PMID:29023451

  18. Physical therapy for facial paralysis: a tailored treatment approach.

    PubMed

    Brach, J S; VanSwearingen, J M

    1999-04-01

    Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. Over 14 physical therapy sessions in 13 months, the patient's facial impairment improved (initial FGS score = 17/100; final FGS score = 68/100) and she reported no remaining functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.

  19. Musical chords and emotion: major and minor triads are processed for emotion.

    PubMed

    Bakker, David Radford; Martin, Frances Heritage

    2015-03-01

    Musical chords are arguably the smallest building blocks of music that retain emotional information. Major chords are generally perceived as positive- and minor chords as negative-sounding, but there has been debate concerning how early these emotional connotations may be processed. To investigate this, emotional facial stimuli and musical chord stimuli were simultaneously presented to participants, and facilitation of processing was measured via event-related potential (ERP) amplitudes. Decreased amplitudes of the P1 and N2 ERP components have been found to index the facilitation of early processing. If simultaneously presented musical chords and facial stimuli are perceived at early stages as belonging to the same emotional category, then early processing should be facilitated for these congruent pairs, and ERP amplitudes should therefore be decreased as compared to the incongruent pairs. ERPs were recorded from 30 musically naive participants as they viewed happy, sad, and neutral faces presented simultaneously with a major or minor chord. When faces and chords were presented that contained congruent emotional information (happy-major or sad-minor), processing was facilitated, as indexed by decreased N2 ERP amplitudes. This suggests that musical chords do possess emotional connotations that can be processed as early as 200 ms in naive listeners. The early stages of processing that are involved suggest that major and minor chords have deeply connected emotional meanings, rather than superficially attributed ones, indicating that minor triads possess negative emotional connotations and major triads possess positive emotional connotations.

  20. Facial Attractiveness as a Moderator of the Association between Social and Physical Aggression and Popularity in Adolescents

    PubMed Central

    Rosen, Lisa H.; Underwood, Marion K.

    2010-01-01

    This study examined the relations between facial attractiveness, aggression, and popularity in adolescence to determine whether facial attractiveness would buffer against the negative effects of aggression on popularity. We collected ratings of facial attractiveness from standardized photographs, and teachers provided information on adolescents’ social aggression, physical aggression, and popularity for 143 seventh graders (70 girls). Regression analyses indicated that facial attractiveness moderated the relations between both types of aggression and popularity. Aggression was associated with a reduction in popularity for adolescents low on facial attractiveness. However, popularity did not decrease as a function of aggression for adolescents high on facial attractiveness. Aggressors with high facial attractiveness may experience fewer negative consequences to their social standing, thus contributing to higher overall rates of aggression in school settings. PMID:20609852

  1. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  2. Emotional Intelligence and Mismatching Expressive and Verbal Messages: A Contribution to Detection of Deception

    PubMed Central

    Wojciechowski, Jerzy; Stolarski, Maciej; Matthews, Gerald

    2014-01-01

    Processing facial emotion, especially mismatches between facial and verbal messages, is believed to be important in the detection of deception. For example, emotional leakage may accompany lying. Individuals with superior emotion perception abilities may then be more adept in detecting deception by identifying mismatch between facial and verbal messages. Two personal factors that may predict such abilities are female gender and high emotional intelligence (EI). However, evidence on the role of gender and EI in detection of deception is mixed. A key issue is that the facial processing skills required to detect deception may not be the same as those required to identify facial emotion. To test this possibility, we developed a novel facial processing task, the FDT (Face Decoding Test) that requires detection of inconsistencies between facial and verbal cues to emotion. We hypothesized that gender and ability EI would be related to performance when cues were inconsistent. We also hypothesized that gender effects would be mediated by EI, because women tend to score as more emotionally intelligent on ability tests. Data were collected from 210 participants. Analyses of the FDT suggested that EI was correlated with superior face decoding in all conditions. We also confirmed the expected gender difference, the superiority of high EI individuals, and the mediation hypothesis. Also, EI was more strongly associated with facial decoding performance in women than in men, implying there may be gender differences in strategies for processing affective cues. It is concluded that integration of emotional and cognitive cues may be a core attribute of EI that contributes to the detection of deception. PMID:24658500

  3. Judgments of facial attractiveness as a combination of facial parts information over time: Social and aesthetic factors.

    PubMed

    Saegusa, Chihiro; Watanabe, Katsumi

    2016-02-01

    Facial attractiveness can be judged on the basis of visual information acquired in a very short duration, but the absolute level of attractiveness changes depending on the duration of the observation. However, how information from individual facial parts contributes to the judgment of whole-face attractiveness is unknown. In the current study, we examined how contributions of facial parts to the judgment of whole-face attractiveness would change over time. In separate sessions, participants evaluated the attractiveness of whole faces, as well as of the eyes, nose, and mouth after observing them for 20, 100, and 1,000 ms. Correlation and multiple regression analyses indicated that the eyes made a consistently high contribution to whole-face attractiveness, even with an observation duration of 20 ms, whereas the contribution of other facial parts increased as the observation duration grew longer. When the eyes were averted, the attractiveness ratings for the whole face were decreased marginally. In addition, the contribution advantage of the eyes at the 20-ms observation duration was diminished. We interpret these results to indicate that (a) eye gaze signals social attractiveness at the early stage (perhaps in combination with emotional expression), (b) other facial parts start contributing to the judgment of whole-face attractiveness by forming aesthetic attractiveness, and (c) there is a dynamic interplay between social and aesthetic attractiveness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    PubMed

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes of the ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as to sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context.
    • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated backward masking paradigm was used to investigate whether individuals with Internet Gaming Disorder (IGD) and normal controls (NC) exhibit different patterns in facial expression processing.
    • The results demonstrated that individuals with IGD respond differently to facial expressions compared with NC on a preattentive level. Behaviorally, individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results further showed (1) decreased amplitudes of the N170 component (an index of early face processing) in individuals with IGD when they process neutral expressions compared with happy expressions in the happy-neutral expressions context, whereas the NC exhibited comparable N170 amplitudes in response to these two expressions; and (2) similar N170 amplitudes in both the IGD and NC groups in response to sad and neutral faces in the sad-neutral expressions context.
    • The decreased N170 amplitudes for neutral faces relative to happy faces in individuals with IGD might be due to their lower expectancies for neutral content in the happy-neutral expressions context, whereas they may hold similar expectancies for neutral and sad faces in the sad-neutral expressions context.

  5. Performance of a Working Face Recognition Machine using Cortical Thought Theory

    DTIC Science & Technology

    1984-12-04

    [Garbled OCR snippet; only fragments are recoverable.] Recommendations from Bledsoe's study included research on facial-recognition systems that are "completely automatic". The fragment cites Bledsoe, W. W., Man-machine facial recognition (Palo Alto: Panoramic Research, Aug 1966) and a companion report on the location of some facial features, and suggests that the location and size of the features left in a contrast-expanded image contain the essential information of the face.

  6. Alexithymia, emotion perception, and social assertiveness in adult women with Noonan and Turner syndromes.

    PubMed

    Roelofs, Renée L; Wingbermühle, Ellen; Freriks, Kim; Verhaak, Chris M; Kessels, Roy P C; Egger, Jos I M

    2015-04-01

    Noonan syndrome (NS) and Turner syndrome (TS) are associated with cognitive problems and difficulties in affective information processing. While both phenotypes include short stature, facial dysmorphisms, and a webbed neck, genetic etiology and neuropsychological phenotype differ significantly. The present study examines putative differences in affective information processing and social assertiveness between adult women with NS and TS. Twenty-six women with NS, 40 women with TS, and 40 female controls were matched on age and intelligence, and subsequently compared on (1) alexithymia, measured by the Bermond-Vorst Alexithymia Questionnaire, (2) emotion perception, evaluated by the Emotion Recognition Task, and (3) social assertiveness and social discomfort, assessed by the Scale for Interpersonal Behavior. Women with TS showed higher levels of alexithymia than women with NS and controls (P-values < 0.001), whereas women with NS had more trouble recognizing angry facial expressions in comparison with controls (P = 0.01). No significant group differences were found for the frequency of social assertiveness and the level of social discomfort. Women with NS and TS demonstrated different patterns of impairment in affective information processing, in terms of alexithymia and emotion perception. The present findings suggest neuropsychological phenotyping to be helpful for the diagnosis of specific cognitive-affective deficits in genetic syndromes, for the enhancement of genetic counseling, and for the development of personalized treatment plans. © 2015 Wiley Periodicals, Inc.

  7. The Public Face of Transplantation: The Potential of Education to Expand the Face Donor Pool.

    PubMed

    Plana, Natalie M; Kimberly, Laura L; Parent, Brendan; Khouri, Kimberly S; Diaz-Siso, J Rodrigo; Fryml, Elise M; Motosko, Catherine C; Ceradini, Daniel J; Caplan, Arthur; Rodriguez, Eduardo D

    2018-01-01

    Despite the growing success of facial transplantation, organ donor shortages remain challenging. Educational health campaigns can effectively inform the general public and institute behavioral modifications. A brief educational introduction to facial transplantation may positively influence the public's position on facial donation. The authors anonymously surveyed 300 participants, gathering basic demographic information, donor registration status, awareness of facial transplantation, and willingness to donate solid organs and facial allografts. Two hundred of these participants were shown an educational video and subsequently resurveyed on facial donation. Factorial parametric analyses were performed to compare responses before and after video exposure. Among participants completing the survey alone (control group), 49 percent were registered donors, 78 percent reported willingness to donate solid organs, and 52 percent reported willingness to donate a facial allograft. Of participants who watched the video (video group), 52 percent were registered; 69 and 51 percent were willing to donate solid organs and face, respectively. Following the educational intervention, 69 percent of participants in the video group reported willingness to donate facial tissue, an 18 percent increase (p < 0.05) that matched the proportion willing to donate solid organs. The greatest increases were observed among younger participants (23 percent); women (22 percent); Jewish (22 percent), Catholic (22 percent), and black/African American (25 percent) participants; and respondents holding a higher degree. No significant differences according to gender or ethnicity were observed. Educational interventions hold much promise for increasing the general public's awareness of facial transplantation and willingness to participate in donation of facial allografts.

  8. Three-dimensional printing for restoration of the donor face: A new digital technique tested and used in the first facial allotransplantation patient in Finland.

    PubMed

    Mäkitie, A A; Salmi, M; Lindford, A; Tuomi, J; Lassus, P

    2016-12-01

    Prosthetic mask restoration of the donor face is essential in current facial transplant protocols. The aim was to develop a new three-dimensional (3D) printing (additive manufacturing; AM) process for the production of a donor face mask that fulfilled the requirements for facial restoration after facial harvest. A digital image of a single test person's face was obtained in a standardized setting and subjected to three different image processing techniques. These data were used for the 3D modeling and printing of a donor face mask. The process was also tested in a cadaver setting and ultimately used clinically in a donor patient after facial allograft harvest. All three developed and tested techniques enabled the timely 3D printing of a custom-made face mask that is almost an exact replica of the donor patient's face. The technique was successfully used in a facial allotransplantation donor patient. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy.

    PubMed

    Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura

    2016-03-26

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.

  10. Impaired recognition of happy facial expressions in bipolar disorder.

    PubMed

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  11. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  12. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  13. Alexithymia and the labeling of facial emotions: response slowing and increased motor and somatosensory processing

    PubMed Central

    2014-01-01

    Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094

  14. Facial biases on vocal perception and memory.

    PubMed

    Boltz, Marilyn G

    2017-06-01

    Does a speaker's face influence the way their voice is heard and later remembered? This question was addressed through two experiments in which participants listened to middle-aged voices accompanied by faces that were age-appropriate, younger, or older than the voice or, as a control, by no face at all. In Experiment 1, participants evaluated each voice on various acoustical dimensions and speaker characteristics. The results showed that facial displays influenced perception such that the same voice was heard differently depending on the age of the accompanying face. Experiment 2 further revealed that facial displays led to memory distortions that were age-congruent in nature. These findings illustrate that faces can activate certain social categories and preconceived stereotypes that then influence vocal and person perception in a corresponding fashion. Processes of face/voice integration are very similar to those of music/film, indicating that the two areas can mutually inform one another and perhaps, more generally, reflect a centralized mechanism of cross-sensory integration. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  16. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  17. The Two Sides of Beauty: Laterality and the Duality of Facial Attractiveness

    ERIC Educational Resources Information Center

    Franklin, Robert G., Jr.; Adams, Reginald B., Jr.

    2010-01-01

    We hypothesized that facial attractiveness represents a dual judgment, a combination of reward-based, sexual processes, and aesthetic, cognitive processes. Herein we describe a study that demonstrates that sexual and nonsexual processes both contribute to attractiveness judgments and that these processes can be dissociated. Female participants…

  18. Attachment Patterns Trigger Differential Neural Signature of Emotional Processing in Adolescents

    PubMed Central

    Decety, Jean; Huepe, David; Cardona, Juan Felipe; Canales-Johnson, Andres; Sigman, Mariano; Mikulan, Ezequiel; Helgiu, Elena; Baez, Sandra; Manes, Facundo; Lopez, Vladimir; Ibañez, Agustín

    2013-01-01

    Background Research suggests that individuals with different attachment patterns process social information differently, especially in terms of facial emotion recognition. However, few studies have explored social information processes in adolescents. This study examined the behavioral and ERP correlates of emotional processing in adolescents with different attachment orientations (insecure attachment group and secure attachment group; IAG and SAG, respectively). This study also explored the association of these correlates to individual neuropsychological profiles. Methodology/Principal Findings We used a modified version of the dual valence task (DVT), in which participants classify stimuli (faces and words) according to emotional valence (positive or negative). Results showed that the IAG performed significantly worse than the SAG on tests of executive function (EF: attention, processing speed, visuospatial abilities, and cognitive flexibility). In the behavioral DVT, the IAG presented lower performance and accuracy. The IAG also exhibited slower RTs for stimuli with negative valence. Compared to the SAG, the IAG showed a negative bias for faces; a larger P1 and an attenuated N170 component over the right hemisphere were observed. A negative bias was also observed in the IAG for word stimuli, which was demonstrated by comparing the N170 amplitude of the IAG with the valence of the SAG. Finally, the amplitude of the N170 elicited by the facial stimuli correlated with EF in both groups (and negative valence with EF in the IAG). Conclusions/Significance Our results suggest that individuals with different attachment patterns process key emotional information and corresponding EF differently. This is evidenced by an early modulation of ERP components’ amplitudes, which are correlated with behavioral and neuropsychological effects. In brief, attachment patterns appear to impact multiple domains, such as emotional processing and EFs. PMID:23940552

  19. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
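
    The classification pipeline named above (PCA features followed by an SVM) can be sketched as follows. scikit-learn's standard RBF-kernel SVC stands in for the paper's multiclass ensemble least-squares SVM, and the face data are synthetic, so this shows only the shape of the pipeline.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((185, 64 * 64))      # 185 flattened face images (toy data)
        y = rng.integers(0, 6, size=185)    # 6 expression classes

        model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
        model.fit(X, y)
        print(model.score(X, y))            # training accuracy on the toy data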

  20. Facial attractiveness as a moderator of the association between social and physical aggression and popularity in adolescents.

    PubMed

    Rosen, Lisa H; Underwood, Marion K

    2010-08-01

    This study examined the relations between facial attractiveness, aggression, and popularity in adolescence to determine whether facial attractiveness would buffer against the negative effects of aggression on popularity. We collected ratings of facial attractiveness from standardized photographs, and teachers provided information on adolescents' social aggression, physical aggression, and popularity for 143 seventh graders (70 girls). Regression analyses indicated that facial attractiveness moderated the relations between both types of aggression and popularity. Aggression was associated with a reduction in popularity for adolescents low on facial attractiveness. However, popularity did not decrease as a function of aggression for adolescents high on facial attractiveness. Aggressors with high facial attractiveness may experience fewer negative consequences to their social standing, thus contributing to higher overall rates of aggression in school settings. Copyright 2010 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. Dynamic Encoding of Face Information in the Human Fusiform Gyrus

    PubMed Central

    Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark

    2014-01-01

    Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing, however temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825

  2. Dynamic encoding of face information in the human fusiform gyrus.

    PubMed

    Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark

    2014-12-08

    Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
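
    For readers unfamiliar with decoding analyses of this kind, the sketch below runs a sliding-window multivariate pattern classification over synthetic electrode-by-time data. The window length, classifier, and cross-validation scheme are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_channels, n_times = 120, 16, 100
        X = rng.standard_normal((n_trials, n_channels, n_times))
        y = rng.integers(0, 2, n_trials)           # e.g., face vs. non-face trials

        window = 10                                # samples per decoding window
        accuracy = [
            cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, :, t:t + window].reshape(n_trials, -1),
                            y, cv=5).mean()
            for t in range(0, n_times - window, window)
        ]
        print(np.round(accuracy, 2))               # chance ~0.5 on random data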

  3. Face processing in chronic alcoholism: a specific deficit for emotional features.

    PubMed

    Maurage, P; Campanella, S; Philippot, P; Martin, S; de Timary, P

    2008-04-01

    It is well established that chronic alcoholism is associated with a deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in visual or facial processing. This study was designed to clarify this issue using multiple control tasks and the subtraction method. Eighteen patients suffering from chronic alcoholism and 18 matched healthy control subjects were asked to perform several tasks evaluating (1) basic visuo-spatial and facial identity processing; (2) simple reaction times; and (3) complex facial feature identification (namely age, emotion, gender, and race). Accuracy and reaction times were recorded. Alcoholic patients showed preserved performance for visuo-spatial and facial identity processing, but impaired performance for visuo-motor abilities and for the detection of complex facial aspects. More importantly, the subtraction method showed that alcoholism is associated with a specific EFE decoding deficit, still present when visuo-motor slowing is controlled for. These results offer a post hoc confirmation of earlier data showing an EFE decoding deficit in alcoholism by strongly suggesting that this deficit is specific to emotions. This may have implications for clinical situations, where emotional impairments are frequently observed among alcoholic subjects.

  4. Priming Facial Gender and Emotional Valence: The Influence of Spatial Frequency on Face Perception in ASD

    ERIC Educational Resources Information Center

    Vanmarcke, Steven; Wagemans, Johan

    2017-01-01

    Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (Valence task). Adolescents with ASD…

  5. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  6. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    PubMed Central

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  7. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. PMID:27430934

  8. No differences in emotion recognition strategies in children with autism spectrum disorder: evidence from hybrid faces.

    PubMed

    Evers, Kris; Kerkhof, Inneke; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
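
    The hybrid-face stimuli described above are simple to construct: one emotional face half is combined with a neutral other half. The toy sketch below operates on arrays standing in for aligned grayscale face images; it illustrates the stimulus logic only, not the authors' materials.

        import numpy as np

        def make_hybrid(emotional, neutral, emotional_on_top=True):
            """Stack one emotional face half on a neutral other half."""
            h = emotional.shape[0] // 2
            top, bottom = ((emotional[:h], neutral[h:]) if emotional_on_top
                           else (neutral[:h], emotional[h:]))
            return np.vstack([top, bottom])

        angry = np.random.rand(100, 80)     # stand-in for an angry face image
        neutral = np.random.rand(100, 80)   # stand-in for the same face, neutral
        print(make_hybrid(angry, neutral).shape)   # (100, 80)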

  9. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) participated in the experiment, which used event-related potentials (ERPs) based on a study-test paradigm. The behavioral results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences in ERPs showed that attractive faces elicited larger early components, such as P1, N170, and P2, in men than in women. The results indicated that recognition bias during memory processing, as modulated by female facial attractiveness, is greater for men than for women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive men and women. These findings establish a contribution of gender differences to the memory processing of female facial attractiveness from an evolutionary perspective.

  10. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  11. Facial nerve paralysis secondary to occult malignant neoplasms.

    PubMed

    Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J

    2004-04-01

    This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative exploration. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, and prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.

  12. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  13. Parametric modulation of neural activity by emotion in youth with bipolar disorder, youth with severe mood dysregulation, and healthy volunteers.

    PubMed

    Thomas, Laura A; Brotman, Melissa A; Muhrer, Eli J; Rosen, Brooke H; Bones, Brian L; Reynolds, Richard C; Deveney, Christen M; Pine, Daniel S; Leibenluft, Ellen

    2012-12-01

    CONTEXT: Youth with bipolar disorder (BD) and those with severe, nonepisodic irritability (severe mood dysregulation [SMD]) exhibit amygdala dysfunction during facial emotion processing. However, studies have not compared such patients with each other and with comparison individuals in neural responsiveness to subtle changes in facial emotion; the ability to process such changes is important for social cognition. To evaluate this, we used a novel, parametrically designed faces paradigm. OBJECTIVE: To compare activation in the amygdala and across the brain in BD patients, SMD patients, and healthy volunteers (HVs). DESIGN: Case-control study. SETTING: Government research institute. PARTICIPANTS: Fifty-seven youths (19 BD, 15 SMD, and 23 HVs). MAIN OUTCOME MEASURE: Blood oxygenation level-dependent data. Neutral faces were morphed with angry and happy faces in 25% intervals; static facial stimuli appeared for 3000 milliseconds. Participants performed hostility or nonemotional facial feature (ie, nose width) ratings. The slope of blood oxygenation level-dependent activity was calculated across neutral-to-angry and neutral-to-happy facial stimuli. RESULTS: In HVs, but not BD or SMD participants, there was a positive association between left amygdala activity and anger on the face. In the neutral-to-happy whole-brain analysis, BD and SMD participants modulated parietal, temporal, and medial-frontal areas differently from each other and from HVs; with increasing facial happiness, SMD patients demonstrated increased, and BD patients decreased, activity in parietal, temporal, and frontal regions. CONCLUSIONS: Youth with BD or SMD differ from HVs in modulation of amygdala activity in response to small changes in facial anger displays. In contrast, individuals with BD or SMD show distinct perturbations in regions mediating attention and face processing in association with changes in the emotional intensity of facial happiness displays. These findings demonstrate similarities and differences in the neural correlates of facial emotion processing in BD and SMD, suggesting that these distinct clinical presentations may reflect differing dysfunctions along a mood disorders spectrum.
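
    The parametric logic of the paradigm reduces to a simple computation: activity is regressed on morph level, and the slope indexes sensitivity to increasing emotional intensity. A minimal sketch with invented numbers (the morph levels follow the design described above; the BOLD values are made up for illustration):

    ```python
    import numpy as np

    # Morph levels from neutral (0%) to angry (100%) in 25% steps,
    # as described in the paradigm.
    levels = np.array([0.0, 0.25, 0.50, 0.75, 1.0])

    # Hypothetical mean BOLD estimates (e.g., beta weights) for one region,
    # one per morph level; values are invented for illustration.
    bold = np.array([0.10, 0.18, 0.22, 0.31, 0.38])

    # Least-squares slope: how strongly activity scales with facial anger.
    slope, intercept = np.polyfit(levels, bold, 1)
    print(f"slope = {slope:.3f} signal units per 100% anger")
    ```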

  14. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in the perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of XP for a sample of the population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.
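
    To make the idea of observer-independent geometric descriptors concrete, here is a minimal sketch (ours, not the article's) in which a video segment is treated as a trajectory through a landmark configuration space and two crude invariants of the flow are computed. The landmark layout and synthetic data are assumptions for illustration.

    ```python
    import numpy as np

    def trajectory_invariants(landmarks):
        """Simple observer-independent descriptors of a facial-motion trajectory.

        landmarks: array of shape (n_frames, n_points, 2), tracked facial
                   landmark positions over one video segment.
        Returns path length and mean speed of the trajectory in the
        flattened landmark configuration space (a crude stand-in for the
        geometric invariants of the space XP discussed above).
        """
        X = landmarks.reshape(len(landmarks), -1)   # one point in R^(2n) per frame
        steps = np.linalg.norm(np.diff(X, axis=0), axis=1)
        return steps.sum(), steps.mean()            # path length, mean speed

    # Illustration with synthetic data: 30 frames, 68 landmarks.
    rng = np.random.default_rng(1)
    segment = rng.normal(size=(30, 68, 2)).cumsum(axis=0)  # random smooth-ish motion
    path_length, mean_speed = trajectory_invariants(segment)
    print(f"path length = {path_length:.2f}, mean speed = {mean_speed:.2f}")
    ```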

  15. Deeper than skin deep - The effect of botulinum toxin-A on emotion processing.

    PubMed

    Baumeister, J-C; Papa, G; Foroni, F

    2016-08-01

    The effect of facial botulinum toxin-A (BTX) injections on the processing of emotional stimuli was investigated. The hypothesis that BTX would interfere with the processing of slightly emotional stimuli, but less with very emotional or neutral stimuli, was largely confirmed. BTX users rated slightly emotional sentences and facial expressions, but not very emotional or neutral ones, as less emotional after the treatment. Furthermore, they became slower at categorizing slightly emotional facial expressions under time pressure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the open question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral, and unpleasant) were primed by images of emotional facial expressions (happy, neutral, and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  17. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Spatial Attention-Related Modulation of the N170 by Backward Masked Fearful Faces

    ERIC Educational Resources Information Center

    Carlson, Joshua M.; Reinke, Karen S.

    2010-01-01

    Facial expressions are a basic form of non-verbal communication that convey important social information to others. The relevancy of this information is highlighted by findings that backward masked facial expressions facilitate spatial attention. This attention effect appears to be mediated through a neural network consisting of the amygdala,…

  19. Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.

    PubMed

    Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J

    2015-09-01

    The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in the occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models of communication between the ventral form and dorsal motion pathways. We show that facial form information modulated the transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in the OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
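
    The gain-control interpretation can be illustrated with a toy simulation (ours, not the authors' connectivity models): an STS response driven by a V5 motion signal whose effective coupling is multiplied up whenever a form signal, standing in for OFA input, is present. The linear form and all parameter values are invented for illustration.

    ```python
    import numpy as np

    def simulate_sts(v5_motion, ofa_form, coupling=0.8, modulation=1.5):
        """Toy illustration of gain modulation on the V5 -> STS motion pathway.

        The STS drive at each time step is the V5 motion signal scaled by a
        baseline coupling, with the gain increased when a facial-form signal
        (standing in for OFA) is present. Parameter values are arbitrary.
        """
        gain = coupling * (1.0 + modulation * ofa_form)
        return gain * v5_motion

    t = np.linspace(0, 1, 100)
    v5 = np.sin(2 * np.pi * 3 * t) ** 2      # motion energy in V5
    form_absent = np.zeros_like(t)           # non-face motion: no form signal
    form_present = np.ones_like(t)           # dynamic face: OFA signals form

    print("mean STS drive, non-face motion:", simulate_sts(v5, form_absent).mean())
    print("mean STS drive, facial motion:  ", simulate_sts(v5, form_present).mean())
    ```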

  20. Subliminal and Supraliminal Processing of Facial Expression of Emotions: Brain Oscillation in the Left/Right Frontal Area

    PubMed Central

    Balconi, Michela; Ferrari, Chiara

    2012-01-01

    The unconscious effects of an emotional stimulus have been highlighted by a vast amount of research, although it remains unclear whether a specific function can be assigned to cortical brain oscillations in the unconscious perception of facial expressions of emotions. Alpha band variation was monitored over the right and left cortical sides while subjects consciously (supraliminal stimulation) or unconsciously (subliminal stimulation) processed facial patterns. Twenty subjects looked at six facial expressions of emotion (anger, fear, surprise, disgust, happiness, and sadness) and a neutral face under two different conditions: supraliminal (200 ms) vs. subliminal (30 ms) stimulation (140 target-mask pairs for each condition). The results showed that conscious/unconscious processing and the significance of the stimulus can modulate alpha power. Moreover, increased right frontal activity was found for negative emotions vs. an increased left response for positive emotions. The significance of the facial expressions was adduced to elucidate these different cortical responses to emotion types. PMID:24962767

  1. Subliminal and supraliminal processing of facial expression of emotions: brain oscillation in the left/right frontal area.

    PubMed

    Balconi, Michela; Ferrari, Chiara

    2012-03-26

    The unconscious effects of an emotional stimulus have been highlighted by a vast amount of research, although it remains unclear whether a specific function can be assigned to cortical brain oscillations in the unconscious perception of facial expressions of emotions. Alpha band variation was monitored over the right and left cortical sides while subjects consciously (supraliminal stimulation) or unconsciously (subliminal stimulation) processed facial patterns. Twenty subjects looked at six facial expressions of emotion (anger, fear, surprise, disgust, happiness, and sadness) and a neutral face under two different conditions: supraliminal (200 ms) vs. subliminal (30 ms) stimulation (140 target-mask pairs for each condition). The results showed that conscious/unconscious processing and the significance of the stimulus can modulate alpha power. Moreover, increased right frontal activity was found for negative emotions vs. an increased left response for positive emotions. The significance of the facial expressions was adduced to elucidate these different cortical responses to emotion types.

  2. Mere social categorization modulates identification of facial expressions of emotion.

    PubMed

    Young, Steven G; Hugenberg, Kurt

    2010-12-01

    The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on and extends this literature. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  3. Automatic Neural Processing of Disorder-Related Stimuli in Social Anxiety Disorder: Faces and More

    PubMed Central

    Schulz, Claudia; Mothes-Lasch, Martin; Straube, Thomas

    2013-01-01

    It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD at the behavioral and neural levels are not yet entirely clear. The present review summarizes neuroscientific findings on the automatic processing of facial threat, but also of other disorder-related stimuli such as emotional prosody or negative words, in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices, as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only a few studies set out to examine automatic neural processes directly, and systematic attempts are as yet lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, the understanding of automatic neural processes could also provide hints for therapeutic approaches. PMID:23745116

  4. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    PubMed

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. Our aim was to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had lower velocity and lower amplitude in patients in comparison with healthy controls (all Ps < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
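
    For readers unfamiliar with kinematic analysis of facial movement, the sketch below shows one way amplitude and peak velocity can be derived from a single tracked marker's 3D trajectory. The sampling rate and synthetic trajectory are assumptions for illustration; the study's actual pipeline (facial action coding on optoelectronic data) is more involved.

    ```python
    import numpy as np

    FS = 100.0  # marker sampling rate in Hz (assumed, not from the study)

    def expression_kinematics(marker_xyz):
        """Amplitude and peak velocity of one facial marker during an expression.

        marker_xyz: array of shape (n_samples, 3), 3D position over time.
        Amplitude is taken as peak displacement from the starting (rest)
        position; velocity is the frame-to-frame speed.
        """
        displacement = np.linalg.norm(marker_xyz - marker_xyz[0], axis=1)
        speed = np.linalg.norm(np.diff(marker_xyz, axis=0), axis=1) * FS
        return displacement.max(), speed.max()

    # Synthetic example: a marker moving out and back along one axis.
    t = np.linspace(0, 1, int(FS))
    traj = np.stack([np.sin(np.pi * t) * 5.0,          # 5 mm excursion
                     np.zeros_like(t), np.zeros_like(t)], axis=1)
    amp, peak_vel = expression_kinematics(traj)
    print(f"amplitude = {amp:.1f} mm, peak velocity = {peak_vel:.1f} mm/s")
    ```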

  5. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy

    PubMed Central

    Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura

    2016-01-01

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools, such as rehabilitation systems based on interactive technologies, have appeared for the rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, facial expression through the management of the muscles of the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is designed both for typically developing children and for children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE for monitoring children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of the oral-facial musculature in children with cerebral palsy. PMID:27023561

  6. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    PubMed

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), the study aims were to (1) determine whether there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and (2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed more than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, the TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition, compared with a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than in the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests that facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  7. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    In humans, the ability to recognize facial expressions relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, a feedback mechanism may play a role in this process. A model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with a parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have a parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  8. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

    In humans, the ability to recognize facial expressions relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, a feedback mechanism may play a role in this process. A model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with a parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have a parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
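
    As a rough illustration of the architectural idea, and not a reconstruction of the authors' model, the sketch below runs a face vector through two parallel streams whose codes are merged for expression categorization. The feedback pathway studied in the paper is omitted, and all layer sizes and weights are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Two parallel streams over the same input (e.g., a coarse/global route and
    # a fine/featural route), merged before categorization. Sizes are arbitrary.
    N_IN, N_HID, N_CLASSES = 256, 32, 6   # 6 basic expressions

    W_stream_a = rng.normal(scale=0.1, size=(N_HID, N_IN))
    W_stream_b = rng.normal(scale=0.1, size=(N_HID, N_IN))
    W_out = rng.normal(scale=0.1, size=(N_CLASSES, 2 * N_HID))

    def categorize(face_vector):
        """Forward pass: both streams see the input; their codes are concatenated."""
        a = relu(W_stream_a @ face_vector)
        b = relu(W_stream_b @ face_vector)
        merged = np.concatenate([a, b])
        return softmax(W_out @ merged)

    probs = categorize(rng.normal(size=N_IN))
    print("expression probabilities:", np.round(probs, 3))
    ```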

  9. Selective attention to a facial feature with and without facial context: an ERP-study.

    PubMed

    Wijers, A A; Van Besouw, N J P; Mulder, G

    2002-04-01

    The present experiment addressed the question of whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending to mouth shape was a lateral posterior negativity and an anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, the result was a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent, and neuroanatomically segregated processors engaged in the selective processing of facial features and in the detection of contextual congruence and emotional expression of face stimuli. The data do not support the idea that early selective attention processes benefit from top-down constraints provided by the correct facial context.

  10. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others’ facial emotion elicits automatic matched facial muscle activation, which in turn facilitates emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) conditions (a) and (b) would result in greater facial muscle activity than (c), (2) condition (a) would increase emotion recognition accuracy from others’ faces compared to (c), and (3) condition (b) would lower recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  11. Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

    PubMed

    Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others' facial emotion elicits automatic matched facial muscle activation, which in turn facilitates emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) conditions (a) and (b) would result in greater facial muscle activity than (c), (2) condition (a) would increase emotion recognition accuracy from others' faces compared to (c), and (3) condition (b) would lower recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  12. Neural evidence for the subliminal processing of facial trustworthiness in infancy.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-04-22

    Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that sensitivity to facial trustworthiness has early ontogenetic origins, as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, as in adults, also occurs in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000 ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness also occurs in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. [Applied anatomy of facial recess and posterior tympanum related to cochlear implantation].

    PubMed

    Zou, Tuanming; Xie, Nanping; Guo, Menghe; Shu, Fan; Zhang, Hongzheng

    2012-05-01

    To investigate the relevant parameters of temporal bone structure in cochlear implantation surgery through the facial recess approach, so as to offer a theoretical reference for avoiding facial nerve injury and for accurate localization. In a surgical simulation experiment, twenty human temporal bones were studied. The relevant parameters were measured under a surgical microscope. The distance between the suprameatal spine and the short process of the incus was (12.44 +/- 0.51) mm. The width from the crotch of the chorda tympani nerve to the stylomastoid foramen was (2.67 +/- 0.51) mm. The distance between the short process of the incus and the crotch of the chorda tympani nerve was (15.22 +/- 0.83) mm. The distances from the point of maximal width of the facial recess to the short process of the incus and to the crotch of the chorda tympani nerve were (6.28 +/- 0.41) mm and (9.81 +/- 0.71) mm, respectively. The maximal width of the facial recess was (2.73 +/- 0.20) mm. The values at the level of the stapes and the round window were (2.48 +/- 0.20) mm and (2.24 +/- 0.18) mm, respectively. The distance between the pyramidal eminence and the anterior round window was (2.22 +/- 0.21) mm. The width from the stapes to beneath the round window was (2.16 +/- 0.14) mm. These parameters provide reference values for determining the cochlear position when inserting the electrode array into the scala tympani, and for opening the facial recess first to avoid potential damage to the facial nerve in surgery.

  14. Impaired mixed emotion processing in the right ventrolateral prefrontal cortex in schizophrenia: an fMRI study.

    PubMed

    Szabó, Ádám György; Farkas, Kinga; Marosi, Csilla; Kozák, Lajos R; Rudas, Gábor; Réthelyi, János; Csukly, Gábor

    2017-12-08

    Schizophrenia has a negative effect on the activity of the temporal and prefrontal cortices in the processing of emotional facial expressions. However, no previous research has focused on the evaluation of mixed emotions in schizophrenia, although they are frequently expressed in everyday situations and negative emotions are often conveyed by mixed facial expressions. Altogether, 37 subjects (19 patients with schizophrenia and 18 healthy control subjects) were enrolled in the study. The two study groups did not differ in age and education. The stimulus set consisted of 10 fearful (100%), 10 happy (100%), 10 mixed fear (70% fear and 30% happy), and 10 mixed happy facial expressions. During the fMRI acquisition, pictures were presented in a randomized order and subjects had to categorize the expressions by button press. Decreased activation was found in the patient group during fear, mixed fear, and mixed happy processing in the right ventrolateral prefrontal cortex (VLPFC) and the right anterior insula (RAI) at the voxel and cluster levels after familywise error correction. No difference was found between the study groups in activations to the happy facial condition. Unlike controls, patients with schizophrenia did not show differential activation between mixed happy and happy facial expressions in the right dorsolateral prefrontal cortex (DLPFC). Patients with schizophrenia showed decreased functioning in right prefrontal regions responsible for salience signaling and valence evaluation during emotion recognition. Our results indicate that fear and mixed happy/fear processing are impaired in schizophrenia, while happy facial expression processing is relatively intact.

  15. Is that disgust I see? Political ideology and biased visual attention.

    PubMed

    Oosterhoff, Benjamin; Shook, Natalie J; Ford, Cameron

    2018-01-15

    Considerable evidence suggests that political liberals and conservatives vary in the way they process and respond to valenced (i.e., negative versus positive) information, with conservatives generally displaying greater negativity biases than liberals. Less is known about whether liberals and conservatives differentially prioritize certain forms of negative information over others. Across two studies using eye-tracking methodology, we examined differences in visual attention to negative scenes and facial expressions based on self-reported political ideology. In Study 1, scenes rated high in fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with less attentional engagement (i.e., lower dwell time) with disgust scenes and more attentional engagement with neutral scenes. Socially conservative political attitudes were not significantly associated with visual attention to fear or sad scenes. In Study 2, images depicting facial expressions of fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with greater attentional engagement with facial expressions depicting disgust and less attentional engagement with neutral faces. Visual attention to fearful or sad faces was not related to social conservatism. Endorsement of economically conservative political attitudes was not consistently associated with biases in visual attention across the two studies. These findings support disease-avoidance models and suggest that social conservatism may be rooted in a greater sensitivity to disgust-related information. Copyright © 2017 Elsevier B.V. All rights reserved.
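
    Dwell time, the measure behind these attentional engagement findings, is simply the summed duration of fixations landing within an area of interest (AOI). A minimal sketch with an assumed data layout and invented fixation data:

    ```python
    import numpy as np

    def dwell_time(fix_x, fix_y, fix_dur, aoi):
        """Total fixation time (dwell time) inside a rectangular area of interest.

        fix_x, fix_y: fixation coordinates; fix_dur: fixation durations (ms);
        aoi: (x_min, y_min, x_max, y_max). Data layout is assumed.
        """
        x0, y0, x1, y1 = aoi
        inside = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
        return fix_dur[inside].sum()

    # Fake fixations on a 1024x768 display; the disgust image occupies one quadrant.
    rng = np.random.default_rng(2)
    x = rng.uniform(0, 1024, size=50)
    y = rng.uniform(0, 768, size=50)
    dur = rng.uniform(100, 400, size=50)      # fixation durations in ms
    disgust_aoi = (0, 0, 512, 384)
    print(f"dwell time on disgust AOI: {dwell_time(x, y, dur, disgust_aoi):.0f} ms")
    ```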

  16. Shyness and Emotion-Processing Skills in Preschoolers: A 6-Month Longitudinal Study

    ERIC Educational Resources Information Center

    Strand, Paul S.; Cerna, Sandra; Downs, Andrew

    2008-01-01

    The present study utilized a short-term longitudinal research design to examine the hypothesis that shyness in preschoolers is differentially related to different aspects of emotion processing. Using teacher reports of shyness and performance measures of emotion processing, including (1) facial emotion recognition, (2) non-facial emotion…

  17. Ethnic and Gender Considerations in the Use of Facial Injectables: Asian Patients.

    PubMed

    Liew, Steven

    2015-11-01

    Asians have distinct facial characteristics due to underlying skeletal and morphological features that differ greatly from those of whites. This, together with higher inherent sun protection and differences in the quality of the skin and soft tissue, has a profound effect on the aging process. Understanding these differences and their effects on the aging process in Asians is crucial for determining the effective utilization and placement of injectable products to ensure optimal aesthetic outcomes. For younger Asian women, the main treatment goal is to address inherent structural deficits through reshaping and the provision of facial support. Facial injectables are used to provide anterior projection, to reduce facial width, and to lengthen facial height. In the older group, the aim is rejuvenation and also addressing the underlying structural issues that have been compounded by age-related volume loss. Asian women requesting cosmetic procedures do not want to be Westernized but rather seek to enhance and optimize their Asian ethnic features.

  18. Social orienting of children with autism to facial expressions and speech: a study with a wearable eye-tracker in naturalistic settings

    PubMed Central

    Magrelli, Silvia; Jermann, Patrick; Noris, Basilio; Ansermet, François; Hentsch, François; Nadel, Jacqueline; Billard, Aude

    2013-01-01

    This study investigates attention orienting to social stimuli in children with Autism Spectrum Conditions (ASC) during dyadic social interactions taking place in real-life settings. We study the effect of social cues that differ in complexity and distinguish between social cues produced by facial expressions of emotion and those produced during speech. We recorded the children's gazes using a head-mounted eye-tracking device and report a detailed, quantitative analysis of gaze motion in response to the social cues. The study encompasses a group of children with ASC from 2 to 11 years old (n = 14) and a group of typically developing (TD) children (n = 17) between 3 and 6 years old. While both groups orient overtly to facial expressions, children with ASC do so to a lesser extent. Children with ASC differ importantly from TD children in the way they respond to speech cues, displaying little overt shifting of attention to speaking faces. When children with ASC orient to facial expressions, they show reaction times and first fixation lengths similar to those of TD children. However, children with ASC orient to speaking faces more slowly than TD children. These results support the hypothesis that individuals affected by ASC have difficulties processing complex social sounds and detecting intermodal correspondence between facial and vocal information. They also corroborate evidence that people with ASC show reduced overt attention toward social stimuli. PMID:24312064

  19. Cutaneous Sensibility Changes in Bell's Palsy Patients.

    PubMed

    Cárdenas Palacio, Carlos Andrés; Múnera Galarza, Francisco Alejandro

    2017-05-01

    Objective: Bell's palsy is a cranial nerve VII dysfunction that renders the patient unable to control the facial muscles on the affected side. Nevertheless, some patients have reported cutaneous changes in the paretic area. Therefore, cutaneous sensibility changes might be additional symptoms within the clinical presentation of this disorder. Accordingly, the aim of this research was to investigate the relationship between cutaneous sensibility and facial paralysis severity in these patients. Study Design: Prospective longitudinal cohort study. Setting: Tertiary care medical center. Subjects and Methods: Twelve acute-onset Bell's palsy patients were enrolled from March to September 2009. In addition, 12 sex- and age-matched healthy volunteers were tested. Cutaneous sensibility was evaluated with pressure threshold and 2-point discrimination at 6 areas of the face. Facial paralysis severity was evaluated with the House-Brackmann scale. Results: Statistically significant correlations based on Spearman's test were found between facial paralysis severity and cutaneous sensitivity on the forehead, eyelid, cheek, nose, and lip (P < .05). Additionally, significant differences based on Student's t test were observed between the two sides of the face in 2-point discrimination on the eyelid, cheek, and lip (P < .05) in Bell's palsy patients but not in healthy subjects. Conclusion: Such results suggest a possible relationship between the loss of motor control of the face and changes in facial sensory information processing. These findings warrant further research on the neurophysiologic changes associated with the cutaneous sensibility disturbances of these patients.

  20. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e., eye orientation and emotions) and stable (i.e., gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by a perceptual deficit in basic visual processing. The EFE decoding deficit thus appears not to be specific to emotion processing but at least partly related to a generalized deficit in lower-level perceptual processing, occurring before the stage of emotion processing and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  1. Facial Movements Facilitate Part-Based, Not Holistic, Processing in Children, Adolescents, and Adults

    ERIC Educational Resources Information Center

    Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2017-01-01

    Although most of the faces we encounter daily are moving ones, much of what we know about face processing and its development is based on studies using static faces that emphasize holistic processing as the hallmark of mature face processing. Here the authors examined the effects of facial movements on face processing developmentally in children…

  2. The “eye avoidance” hypothesis of autism face processing

    PubMed Central

    Tanaka, James W.; Sung, Andrew

    2013-01-01

    Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: 1) the holistic hypothesis, 2) the local perceptual bias hypothesis, and 3) the eye avoidance hypothesis. A review of the literature indicates that, contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that individuals with ASD demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits: individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits a heightened physiological response, as indicated by heightened skin conductance and increased amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this strategy has negative effects on the ability to decode facial information about identity, expression, and intentions, exacerbating the social challenges for persons with ASD. PMID:24150885

  3. Tolerance to spatial-relational transformations in unfamiliar faces: A further challenge to a configural processing account of identity recognition.

    PubMed

    Lorenzino, Martina; Caminati, Martina; Caudek, Corrado

    2018-05-25

    One of the most important questions in face perception research is to understand what information is extracted from a face in order to recognize its identity. Recognition of facial identity has been attributed to a special sensitivity to "configural" information. However, recent studies have challenged the configural account by showing that participants are poor at discriminating variations in the metric distances among facial features, especially for familiar as opposed to unfamiliar faces, whereas a configural account predicts the opposite. We aimed to extend these previous results by examining classes of unfamiliar faces with which we have different levels of expertise. We hypothesized an inverse relation between sensitivity to configural information and expertise with a given class of faces, but only for neutral expressions. By first matching perceptual discriminability, we measured tolerance to subtle configural transformations with same-race (SR) versus other-race (OR) faces, and with upright versus upside-down faces. Consistent with our predictions, we found a lower sensitivity to at-threshold configural changes for SR compared to OR faces. We also found that, for our stimuli, the face inversion effect disappeared for neutral but not for emotional faces - a result that can also be attributed to a lower sensitivity to configural transformations for faces presented in a more familiar orientation. The present findings question a purely configural account of face processing and suggest that the role of spatial-relational information in face processing varies according to the functional demands of the task and to the characteristics of the stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Quantifying water flow and retention in an unsaturated fracture-facial domain

    USGS Publications Warehouse

    Nimmo, John R.; Malek-Mohammadi, Siamak

    2015-01-01

    Hydrologically significant flow and storage of water occur in macropores and fractures that are only partially filled. To accommodate such processes in flow models, we propose a three-domain framework. Two of the domains correspond to water flow and water storage in a fracture-facial region, in addition to the third domain of matrix water. The fracture-facial region, typically within a fraction of a millimeter of the fracture wall, includes a flowing phase whose fullness is determined by the availability and flux of preferentially flowing water, and a static storage portion whose fullness is determined by the local matric potential. The flow domain can be modeled with the source-responsive preferential flow model, and the roughness-storage domain can be modeled with capillary relations applied on the fracture-facial area. The matrix domain is treated using traditional unsaturated flow theory. We tested the model with application to the hydrology of the Chalk formation in southern England, coherently linking hydrologic information including recharge estimates, streamflow, water table fluctuation, imaging by electron microscopy, and surface roughness. The quantitative consistency of the three-domain matrix-microcavity-film model with this body of diverse data supports the hypothesized distinctions and active mechanisms of the three domains and establishes the usefulness of this framework.
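
    To make the three-domain bookkeeping concrete, here is a toy sketch of how total water content might be partitioned under this framework. The saturation curve, the proportionality to input flux, and all constants are our illustrative assumptions, not relations taken from the paper.

    ```python
    import numpy as np

    def total_water(theta_matrix, matric_potential, input_flux,
                    roughness_capacity=0.02, film_capacity=0.01, psi_ref=-1.0):
        """Toy bookkeeping for the three-domain framework described above.

        - matrix domain: volumetric water content from traditional unsaturated
          flow theory (passed in directly here);
        - roughness-storage domain on the fracture-facial area: fullness set by
          the local matric potential (a made-up saturation curve is used);
        - film-flow domain: fullness set by the availability of preferentially
          flowing water, here simply proportional to the current input flux.
        All functional forms and constants are illustrative, not from the paper.
        """
        roughness_fill = roughness_capacity / (1.0 + matric_potential / psi_ref)
        film_fill = film_capacity * np.clip(input_flux, 0.0, 1.0)
        return theta_matrix + roughness_fill + film_fill

    # Example: moderately wet matrix, mild suction, steady preferential input.
    print(total_water(theta_matrix=0.25, matric_potential=-0.5, input_flux=0.6))
    ```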

  5. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames during emotion classification, saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shifts of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
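
    The core idea, deciding whether a frame is still neutral by comparing patches around key points with a user-specific neutral template, can be sketched as follows. This is our simplified illustration: the patch names, threshold, and use of plain normalized cross-correlation are assumptions, and the paper's affine-distortion handling and neighborhood search are omitted.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two image patches."""
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def is_neutral(frame_patches, neutral_templates, threshold=0.7):
        """Classify a frame as neutral if patches around key emotion points
        still resemble the user's reference neutral appearance.

        frame_patches / neutral_templates: dicts mapping a key-point name to
        a 2D grayscale patch. Names and threshold are hypothetical.
        """
        scores = [ncc(frame_patches[k], neutral_templates[k])
                  for k in neutral_templates]
        return np.mean(scores) > threshold

    # Synthetic check: the "frame" is a noisy copy of the neutral templates.
    rng = np.random.default_rng(3)
    templates = {k: rng.normal(size=(16, 16)) for k in ("mouth_corner", "brow", "eye")}
    frame = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in templates.items()}
    print("neutral?", is_neutral(frame, templates))
    ```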

  6. Face processing in autism: Reduced integration of cross-feature dynamics.

    PubMed

    Shah, Punit; Bird, Geoffrey; Cook, Richard

    2016-02-01

    Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye region, indicating that the striking group difference has a perceptual, not attentional, origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Automatic processing of facial affects in patients with borderline personality disorder: associations with symptomatology and comorbid disorders.

    PubMed

    Donges, Uta-Susan; Dukalski, Bibiana; Kersting, Anette; Suslow, Thomas

    2015-01-01

    Instability of affects and interpersonal relations are important features of borderline personality disorder (BPD). The interpersonal problems of individuals suffering from BPD might develop on the basis of abnormalities in the processing of facial affects and a high sensitivity to negative affective expressions. The aims of the present study were to examine automatic evaluative shifts and latencies as a function of masked facial affects in patients with BPD compared to healthy individuals. As BPD comorbidity rates for mental and personality disorders are high, we also investigated the relationships of affective processing characteristics with specific borderline symptoms and comorbidity. Twenty-nine women with BPD and 38 healthy women participated in the study. The majority of patients suffered from additional Axis I disorders and/or additional personality disorders. In the priming experiment, an angry, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by neutral faces that had to be evaluated. Evaluative decisions and response latencies were registered. Borderline-typical symptomatology was assessed with the Borderline Symptom List. In the total sample, valence-congruent evaluative shifts and delays of evaluative decisions due to facial affect were observed. No between-group differences were obtained for evaluative decisions or latencies. The presence of comorbid anxiety disorders was found to be positively correlated with evaluative shifting owing to masked happy primes, regardless of baseline (neutral or no facial expression) condition. The presence of comorbid depressive disorder, paranoid personality disorder, and symptoms of social isolation and self-aggression were significantly correlated with response delay due to masked angry faces, regardless of baseline. In the present affective priming study, no abnormalities in the automatic recognition and processing of facial affects were observed in BPD patients compared to healthy individuals. The presence of comorbid anxiety disorders could make patients more susceptible to the influence of a happy expression on judgment processes at an automatic processing level. Comorbid depressive disorder, paranoid personality disorder, and symptoms of social isolation and self-aggression may enhance automatic attention allocation to threatening facial expressions in BPD. Increased automatic vigilance for social threat stimuli might contribute to affective instability and interpersonal problems in specific patients with BPD.

  8. The Systems Engineering Design of a Smart Forward Operating Base Surveillance System for Forward Operating Base Protection

    DTIC Science & Technology

    2013-06-01

    fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC...the UAV is processed on board for facial recognition and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video...captured by the fixed sensors are sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal

  9. The Functional Role of the Periphery in Emotional Language Comprehension

    PubMed Central

    Havas, David A.; Matheson, James

    2013-01-01

    Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that (1) facial expressions accompany sudden shifts in wellbeing as described in language; (2) facial expressions modulate emotional action systems during reading; and (3) emotional action systems prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition. PMID:23750145

  10. Agency and facial emotion judgment in context.

    PubMed

    Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai

    2013-06-01

    Past research has shown that East Asians' belief in holism is expressed in their tendency to incorporate background facial emotions into the evaluation of target faces to a greater degree than North Americans. However, this pattern can also be interpreted as North Americans' tendency to downplay background facial emotions due to their conceptualization of facial emotion as a volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on one's face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded by either affectively salient landscape sceneries or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.

  11. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
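
    As a rough illustration of the layout-optimization problem (not the paper's thin-shell formulation, and without its symmetry or multiresolution constraints), the sketch below greedily selects marker vertices whose trajectories best linearly reconstruct a ground-truth mesh sequence; the toy data are synthetic.

    ```python
    # Greedy marker-layout selection: pick vertices whose trajectories best
    # linearly reconstruct the full mesh animation (a simplified stand-in
    # for the paper's optimization of characteristic control points).
    import numpy as np

    def select_markers(meshes, n_markers):
        """meshes: (frames, n_vertices, 3) ground-truth mesh animation."""
        F, N, _ = meshes.shape
        X = meshes.reshape(F, N * 3)          # flatten per-frame vertex positions
        chosen = []
        for _ in range(n_markers):
            best_v, best_err = None, np.inf
            for v in range(N):
                if v in chosen:
                    continue
                cols = [3 * u + d for u in chosen + [v] for d in range(3)]
                M = X[:, cols]                # candidate marker trajectories
                W, *_ = np.linalg.lstsq(M, X, rcond=None)
                err = np.linalg.norm(X - M @ W)
                if err < best_err:
                    best_v, best_err = v, err
            chosen.append(best_v)
        return chosen

    # Toy usage: 120 frames of a 50-vertex mesh, choose 8 marker sites.
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(120, 50, 3)).cumsum(axis=0) * 0.01
    print(select_markers(demo, 8))
    ```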

  12. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
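
    The two reference frames are related by simple viewing geometry, which the following sketch makes explicit (the image size and viewing distances are illustrative): an image-based SF stays constant as the face recedes, while the corresponding retina-based SF rises.

    ```python
    # Converting image-based SF (cycles/image, distance-invariant) to
    # retina-based SF (cycles/degree, distance-dependent).
    import math

    def cycles_per_degree(cycles_per_image, image_width_cm, viewing_distance_cm):
        # Visual angle subtended by the image, in degrees.
        angle = 2 * math.degrees(math.atan(image_width_cm / (2 * viewing_distance_cm)))
        return cycles_per_image / angle

    # The same 8 cycles/image face yields different retinal SFs at 57 cm vs 228 cm.
    for d in (57, 228):
        print(d, round(cycles_per_degree(8, 10, d), 2))
    ```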

  13. The Development of Facial Emotion Recognition: The Role of Configural Information

    ERIC Educational Resources Information Center

    Durand, Karine; Gallay, Mathieu; Seigneuric, Alix; Robichon, Fabrice; Baudouin, Jean-Yves

    2007-01-01

    The development of children's ability to recognize facial emotions and the role of configural information in this development were investigated. In the study, 100 5-, 7-, 9-, and 11-year-olds and 26 adults needed to recognize the emotion displayed by upright and upside-down faces. The same participants needed to recognize the emotion displayed by…

  14. Judging Pain Intensity in Children with Autism Undergoing Venepuncture: The Influence of Facial Activity

    ERIC Educational Resources Information Center

    Messmer, Rosemary L.; Nader, Rami; Craig, Kenneth D.

    2008-01-01

    The biasing effect of pain sensitivity information and the impact of facial activity on observers' judgments of pain intensity of children with autism were examined. Observers received information that pain experience in children with autism is either the same as, more intense than, or less intense than children without autism. After viewing six…

  15. Parallel Processing in Face Perception

    ERIC Educational Resources Information Center

    Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.

    2010-01-01

    The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…

  16. Compensatory premotor activity during affective face processing in subclinical carriers of a single mutant Parkin allele.

    PubMed

    Anders, Silke; Sack, Benjamin; Pohl, Anna; Münte, Thomas; Pramstaller, Peter; Klein, Christine; Binkofski, Ferdinand

    2012-04-01

    Patients with Parkinson's disease suffer from significant motor impairments and accompanying cognitive and affective dysfunction due to progressive disturbances of basal ganglia-cortical gating loops. Parkinson's disease has a long presymptomatic stage, which indicates a substantial capacity of the human brain to compensate for dopaminergic nerve degeneration before clinical manifestation of the disease. Neuroimaging studies provide evidence that increased motor-related cortical activity can compensate for progressive dopaminergic nerve degeneration in carriers of a single mutant Parkin or PINK1 gene, who show a mild but significant reduction of dopamine metabolism in the basal ganglia in the complete absence of clinical motor signs. However, it is currently unknown whether similar compensatory mechanisms are effective in non-motor basal ganglia-cortical gating loops. Here, we ask whether asymptomatic Parkin mutation carriers show altered patterns of brain activity during processing of facial gestures, and whether this might compensate for latent facial emotion recognition deficits. Current theories in social neuroscience assume that execution and perception of facial gestures are linked by a special class of visuomotor neurons ('mirror neurons') in the ventrolateral premotor cortex/pars opercularis of the inferior frontal gyrus (Brodmann area 44/6). We hypothesized that asymptomatic Parkin mutation carriers would show increased activity in this area during processing of affective facial gestures, replicating the compensatory motor effects that have previously been observed in these individuals. Additionally, Parkin mutation carriers might show altered activity in other basal ganglia-cortical gating loops. Eight asymptomatic heterozygous Parkin mutation carriers and eight matched controls underwent functional magnetic resonance imaging and a subsequent facial emotion recognition task. As predicted, Parkin mutation carriers showed significantly stronger activity in the right ventrolateral premotor cortex during execution and perception of affective facial gestures than healthy controls. Furthermore, Parkin mutation carriers showed a slightly reduced ability to recognize facial emotions that was least severe in individuals who showed the strongest increase of ventrolateral premotor activity. In addition, Parkin mutation carriers showed a significantly weaker than normal increase of activity in the left lateral orbitofrontal cortex (inferior frontal gyrus pars orbitalis, Brodmann area 47), which was unrelated to facial emotion recognition ability. These findings are consistent with the hypothesis that compensatory activity in the ventrolateral premotor cortex during processing of affective facial gestures can reduce impairments in facial emotion recognition in subclinical Parkin mutation carriers. A breakdown of this compensatory mechanism might lead to the impairment of facial expressivity and facial emotion recognition observed in manifest Parkinson's disease.

  17. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial actions can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. There are thousands of facial muscular movements, many of which have very subtle differences; moreover, muscular movements always occur simultaneously when the pose changes. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. To evaluate the proposed method, we used the Korean face database for model training, and the CUbiC FacePix database, the Facial Expressions and Emotion Database, the Japanese Female Facial Expression database, and our own database for testing. Our experimental results clearly demonstrate the feasibility of the proposed approach.
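
    A hedged sketch of this kind of front end, pairing a local Gabor filter bank with PCA over patch responses around candidate facial points; the scales, orientations, patch size, and file names are assumptions, not the authors' settings.

    ```python
    # Gabor-bank + PCA features around candidate facial points (a minimal
    # sketch; 5 scales x 8 orientations and 32-pixel patches are assumed).
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def gabor_bank(ksize=21):
        kernels = []
        for sigma in (2, 3, 4, 5, 6):          # 5 assumed scales
            for k in range(8):                  # 8 assumed orientations
                theta = k * np.pi / 8
                kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                                  10.0, 0.5, 0))
        return kernels

    def point_features(gray, points, bank, half=16):
        """Mean Gabor response in a patch around each candidate facial point."""
        feats = []
        for x, y in points:
            patch = gray[y - half:y + half, x - half:x + half].astype(np.float32)
            feats.append([cv2.filter2D(patch, cv2.CV_32F, k).mean() for k in bank])
        return np.asarray(feats)

    # Usage sketch (hypothetical image and point coordinates):
    # gray = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)
    # feats = point_features(gray, [(120, 140), (180, 140)], gabor_bank())
    # compact = PCA(n_components=10).fit_transform(training_features)
    ```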

  18. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Facial hair is a highly valued soft biometric attribute for carrying out forensic facial analysis, and its robust detection and segmentation are correspondingly important. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images, and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology (FERET) database, and the Labeled Faces in the Wild (LFW) database.
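
    The classification stage can be approximated with off-the-shelf components; below, HOG features feed an L1-penalized linear classifier as a simple stand-in for the dynamic sparse classifier (the MSQ preprocessing and level-set refinement are omitted, and the training patches are synthetic).

    ```python
    # HOG features + L1-penalized linear classifier for skin vs facial hair
    # patches (a minimal sketch on synthetic toy patches).
    import numpy as np
    from skimage.feature import hog
    from sklearn.linear_model import SGDClassifier

    def patch_hog(patch):
        # patch: 2-D grayscale array, e.g. 32x32, values in [0, 1]
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # Toy training data: smooth "skin" patches vs high-frequency "hair" patches.
    rng = np.random.default_rng(1)
    skin = [np.full((32, 32), 0.6) + rng.normal(0, 0.01, (32, 32)) for _ in range(50)]
    hair = [rng.random((32, 32)) for _ in range(50)]
    X = np.array([patch_hog(p) for p in skin + hair])
    y = np.array([0] * 50 + [1] * 50)

    clf = SGDClassifier(penalty='l1', alpha=1e-4, random_state=0).fit(X, y)
    print(clf.score(X, y))    # training accuracy on the toy patches
    ```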

  19. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals.

  1. Rapid processing of emotional expressions without conscious awareness.

    PubMed

    Smith, Marie L

    2012-08-01

    Rapid, accurate categorization of the emotional state of our peers is of critical importance, and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging "which expression?" task in response to briefly presented backward-masked expressive faces. Although participants' behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions this modulation was driven by negative facial expressions and was present on all emotional trials independent of later categorization, whereas the N170 component, recorded at lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful ones, are processed without conscious awareness at an early stage, and highlight the critical importance of considering the categorization response when studying subliminal perception.

  2. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved).

  3. Depth Structure from Asymmetric Shading Supports Face Discrimination

    PubMed Central

    Chen, Chien-Chung; Chen, Chin-Mei; Tyler, Christopher W.

    2013-01-01

    To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component. PMID:23457484

  4. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    PubMed

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Correlation between hedonic liking and facial expression measurement using dynamic affective response representation.

    PubMed

    Zhi, Ruicong; Wan, Jingwei; Zhang, Dezheng; Li, Weiping

    2018-06-01

    Emotional reactions towards products play an essential role in consumers' decision making, and are more important than rational evaluation of sensory attributes. It is crucial to understand consumers' emotions and the relationship between sensory properties, human liking, and choice. There are many inconsistencies between Asian and Western consumers in the usage of the hedonic scale, as well as in the intensity of facial reactions, due to different cultures and consuming habits. However, very few studies have discussed the facial response characteristics of Asian consumers during food consumption. In this paper, an explicit liking measurement (hedonic scale) and an implicit emotional measurement (facial expressions) were evaluated to judge the emotions elicited in consumers by five types of juices. The contributions of this study include: (1) constructing a relationship model between hedonic liking and facial expressions analyzed by face-reading technology, in which the negative emotions "sadness", "anger", and "disgust" showed a noticeably negative correlation tendency with hedonic scores, while the "liking" hedonic scores could be characterized by the positive emotion "happiness"; and (2) extracting several emotional-intensity-based parameters, especially a dynamic parameter, to describe facial characteristics during the sensory evaluation procedure. Both amplitude and frequency information were incorporated into the dynamic parameter to retain more information from the emotional response signals. From a comparison of four types of emotional descriptive parameters, the maximum parameter and the dynamic parameter are suggested for representing emotional states and intensities. Copyright © 2018 Elsevier Ltd. All rights reserved.
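
    A minimal sketch of the parameter extraction and correlation analysis, assuming frame-by-frame emotion-intensity traces from face-reading software; the maximum and "dynamic" parameter formulas below are illustrative, not the paper's exact definitions.

    ```python
    # Relating per-sample expression-intensity parameters to hedonic scores.
    import numpy as np
    from scipy.stats import pearsonr

    def expression_parameters(intensity):
        """intensity: 1-D array, frame-by-frame intensity of one emotion."""
        amplitude = intensity.max()
        # Count of direction changes, as a crude frequency measure.
        frequency = np.sum(np.diff(np.sign(np.diff(intensity))) != 0)
        dynamic = amplitude * (1 + frequency / len(intensity))   # assumed formula
        return amplitude, dynamic

    # Toy data: 30 tastings, each with a 100-frame 'disgust' intensity trace
    # constructed so that disgust falls as hedonic liking rises.
    rng = np.random.default_rng(2)
    hedonic = rng.uniform(1, 9, size=30)
    traces = [np.clip(rng.normal(0.5 - 0.05 * h, 0.1, 100), 0, 1) for h in hedonic]
    dyn = np.array([expression_parameters(t)[1] for t in traces])
    r, p = pearsonr(hedonic, dyn)
    print(f"r = {r:.2f}, p = {p:.3f}")   # expect a negative trend for disgust
    ```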

  6. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing energy functions. Another part of the FAPs, the 3D rigid head motion vectors, is estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
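
    Two steps of such a pipeline are straightforward to sketch with OpenCV: skin-color segmentation and gradient-projection histograms for locating feature rows and columns. The YCrCb thresholds below are common defaults, not the paper's values.

    ```python
    # Skin-color segmentation plus gradient projections for locating
    # facial feature regions (a minimal sketch with assumed thresholds).
    import cv2
    import numpy as np

    def skin_mask(bgr):
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        # Widely used skin ranges on the Cr and Cb channels.
        return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    def gradient_projections(gray):
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        # Rows with strong horizontal edge energy flag eye/mouth candidates.
        row_profile = np.abs(gy).sum(axis=1)
        col_profile = np.abs(gx).sum(axis=0)
        return row_profile, col_profile

    # Usage sketch: mask = skin_mask(frame); rows, cols = gradient_projections(gray)
    # Peaks in `rows` inside the skin region suggest the eye and mouth lines.
    ```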

  7. Face-to-face: Perceived personal relevance amplifies face processing

    PubMed Central

    Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.

    2017-01-01

    The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from, each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that the evaluation of emotional faces varies strongly with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations. PMID:28158672

  8. Are Happy Faces Attractive? The Roles of Early vs. Late Processing

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.

    2015-01-01

    Facial attractiveness is closely related to romantic love. To understand whether perceived facial attractiveness and facial expression rely on similar neural underpinnings, we recorded neural signals using an event-related potential (ERP) methodology from 20 participants who were viewing faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by the LPP, a late component. The findings suggest that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After this initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885

  9. Improved facial affect recognition in schizophrenia following an emotion intervention, but not training attention-to-facial-features or treatment-as-usual.

    PubMed

    Tsotsi, Stella; Kosmidis, Mary H; Bozikas, Vasilis P

    2017-08-01

    In schizophrenia, impaired facial affect recognition (FAR) has been associated with patients' overall social functioning. Interventions targeting attention or FAR per se have invariably yielded improved FAR performance in these patients. Here, we compared the effects of two interventions, one targeting FAR and one targeting attention-to-facial-features, with treatment-as-usual on patients' FAR performance. Thirty-nine outpatients with schizophrenia were randomly assigned to one of three groups: FAR intervention (training to recognize emotional information, conveyed by changes in facial features), attention-to-facial-features intervention (training to detect changes in facial features), and treatment-as-usual. Also, 24 healthy controls, matched for age and education, were assigned to one of the two interventions. Two FAR measurements, baseline and post-intervention, were conducted using an original experimental procedure with alternative sets of stimuli. We found improved FAR performance following the intervention targeting FAR in comparison to the other patient groups, which in fact was comparable to the pre-intervention performance of healthy controls in the corresponding intervention group. This improvement was more pronounced in recognizing fear. Our findings suggest that compared to interventions targeting attention, and treatment-as-usual, training programs targeting FAR can be more effective in improving FAR in patients with schizophrenia, particularly assisting them in perceiving threat-related information more accurately. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  10. Male facial attractiveness and masculinity may provide sex- and culture-independent cues to semen quality.

    PubMed

    Soler, C; Kekäläinen, J; Núñez, M; Sancho, M; Álvarez, J G; Núñez, J; Yaber, I; Gutiérrez, R

    2014-09-01

    The phenotype-linked fertility hypothesis (PLFH) predicts that male secondary sexual traits reveal honest information about male fertilization ability. However, PLFH has rarely been studied in humans. The aim of the present study was to test PLFH in humans and to investigate whether the potential ability to select fertile partners is independent of sex or cultural background. We found that, contrary to the hypothesis, facial masculinity was negatively associated with semen quality. As increased levels of testosterone have been demonstrated to impair sperm production, this finding may indicate a trade-off between investments in secondary sexual signalling (i.e. facial masculinity) and fertility, or status-dependent differences in investments in semen quality. In both sexes and nationalities (Spanish and Colombian), ranked male facial attractiveness predicted male semen quality. However, Spanish males and females generally rated facial images as more attractive (gave higher ranks) than Colombian raters did, and in both nationalities, males gave higher ranks than females. This suggests that male facial cues may provide culture- and sex-independent information about male fertility. However, our results also indicate that humans may be more sensitive to facial attractiveness cues within their own populations, and also that males may generally overestimate the attractiveness of other men to females. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  11. Brain Systems for Assessing Facial Attractiveness

    ERIC Educational Resources Information Center

    Winston, Joel S.; O'Doherty, John; Kilner, James M.; Perrett, David I.; Dolan, Raymond J.

    2007-01-01

    Attractiveness is a facial attribute that shapes human affiliative behaviours. In a previous study we reported a linear response to facial attractiveness in orbitofrontal cortex (OFC), a region involved in reward processing. There are strong theoretical grounds for the hypothesis that coding stimulus reward value also involves the amygdala. The…

  12. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    ERIC Educational Resources Information Center

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization of processing emotional signals; but not of processing baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll holding task. During a signal detection task they were then…

  13. The role of holistic processing in judgments of facial attractiveness.

    PubMed

    Abbas, Zara-Angela; Duchaine, Bradley

    2008-01-01

    Previous work has demonstrated that facial identity recognition, expression recognition, gender categorisation, and race categorisation rely on a holistic representation. Here we examine whether a holistic representation is also used for judgments of facial attractiveness. Like past studies, we used the composite paradigm to assess holistic processing (Young et al 1987, Perception 16 747-759). Experiment 1 showed that top halves of upright faces are judged to be more attractive when aligned with an attractive bottom half than when aligned with an unattractive bottom half. To assess whether this effect resulted from holistic processing or more general effects, we examined the impact of the attractive and unattractive bottom halves when upright halves were misaligned and when aligned and misaligned halves were presented upside-down. The bottom halves had no effect in either condition. These results demonstrate that the perceptual processes underlying upright facial-attractiveness judgments represent the face holistically. Our findings with attractiveness judgments and previous demonstrations involving other aspects of face processing suggest that a common holistic representation is used for most types of face processing.

  14. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  15. A unified coding strategy for processing faces and voices

    PubMed Central

    Yovel, Galit; Belin, Pascal

    2013-01-01

    Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, where similar computational problems in different modalities are solved using similar solutions. PMID:23664703

  16. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. More than a face: a unified theoretical perspective on nonverbal social cue processing in social anxiety

    PubMed Central

    Gilboa-Schechtman, Eva; Shachar-Lavie, Iris

    2013-01-01

    Processing of nonverbal social cues (NVSCs) is essential to interpersonal functioning and is particularly relevant to models of social anxiety. This article provides a review of the literature on NVSC processing from the perspective of social rank and affiliation biobehavioral systems (ABSs), based on functional analysis of human sociality. We examine the potential of this framework for integrating cognitive, interpersonal, and evolutionary accounts of social anxiety. We argue that NVSCs are uniquely suited to rapid and effective conveyance of emotional, motivational, and trait information and that various channels are differentially effective in transmitting such information. First, we review studies on perception of NVSCs through face, voice, and body. We begin with studies that utilized information processing or imaging paradigms to assess NVSC perception. This research demonstrated that social anxiety is associated with biased attention to, and interpretation of, emotional facial expressions (EFEs) and emotional prosody. Findings regarding body and posture remain scarce. Next, we review studies on NVSC expression, which pinpointed links between social anxiety and disturbances in eye gaze, facial expressivity, and vocal properties of spontaneous and planned speech. Again, links between social anxiety and posture were understudied. Although cognitive, interpersonal, and evolutionary theories have described different pathways to social anxiety, all three models focus on interrelations among cognition, subjective experience, and social behavior. NVSC processing and production comprise the juncture where these theories intersect. In light of the conceptualizations emerging from the review, we highlight several directions for future research including focus on NVSCs as indexing reactions to changes in belongingness and social rank, the moderating role of gender, and the therapeutic opportunities offered by embodied cognition to treat social anxiety. PMID:24427129

  18. Biomask: An Advanced Robotic System for the Real-time, Autonomous Monitoring and Treatment of Facial Burns of Wounded Soldiers

    DTIC Science & Technology

    2013-04-01

    bioreactor systems, a microfluidic-based flexible fluid exchange patch was developed for porcine wound models. A novel design and fabrication process...to be established. 15. SUBJECT TERMS Biomask, burn injury, facial reconstruction, wound-healing, bioreactor, flexible microfluidic, and...and layers of facial skin using different cell types and matrices to produce a reliable, physiologic facial and skin construct to restore functional

  19. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger, and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization, and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several technologies are summarized and analyzed, all relating to facial expression recognition and pose handling, including pose-indexed multi-view methods for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust statistics face formalization.

  20. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  1. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases (92 cases in total). An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to that described in the present study, i.e., a visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
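
    The statistical core, principal-component features scored with leave-one-out cross-validation, can be sketched as follows; the nearest-neighbor classifier and synthetic data are stand-ins for the system's actual learner and face images.

    ```python
    # PCA features + leave-one-out cross-validation over 92 cases and
    # 15 syndrome classes (a minimal sketch on synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Toy stand-in for 92 aligned face images (flattened pixel vectors).
    rng = np.random.default_rng(3)
    n_cases, n_pixels, n_syndromes = 92, 64 * 64, 15
    y = rng.integers(0, n_syndromes, size=n_cases)
    X = rng.normal(size=(n_cases, n_pixels)) + y[:, None] * 0.05  # class-shifted

    model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=1))
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy: {acc:.2f}")
    ```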

  2. Encapsulated social perception of emotional expressions.

    PubMed

    Smortchkova, Joulia

    2017-01-01

    In this paper I argue that the detection of emotional expressions is, in its early stages, informationally encapsulated. I clarify and defend such a view via the appeal to data from social perception on the visual processing of faces, bodies, facial and bodily expressions. Encapsulated social perception might exist alongside processes that are cognitively penetrated, and that have to do with recognition and categorization, and play a central evolutionary function in preparing early and rapid responses to the emotional stimuli. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. The Automaticity of Emotional Face-Context Integration

    PubMed Central

    Aviezer, Hillel; Dudarev, Veronica; Bentin, Shlomo; Hassin, Ran R.

    2011-01-01

    Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration. Specifically, we examine whether it is a relatively controlled or automatic process. In Experiment 1 participants were motivated and instructed to avoid using the context while categorizing contextualized facial expression, or they were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner. PMID:21707150

  4. Residual fMRI sensitivity for identity changes in acquired prosopagnosia.

    PubMed

    Fox, Christopher J; Iaria, Giuseppe; Duchaine, Bradley C; Barton, Jason J S

    2013-01-01

    While a network of cortical regions contributes to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes, we determined the residual sensitivity of face-selective regions of interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination, with normal performance on the Benton Facial Recognition Test but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception.

  5. The effects of serotonin manipulations on emotional information processing and mood.

    PubMed

    Merens, Wendelien; Willem Van der Does, A J; Spinhoven, Philip

    2007-11-01

    Serotonin is implicated in both mood and cognition. It has recently been shown that antidepressant treatment has immediate effects on emotional information processing, much faster than any clinically significant effects. This review aims to investigate whether the effects on emotional information processing are reliable, and whether these effects are related to eventual clinical outcome. Treatment efficiency may be greatly improved if early changes in emotional information processing are found to predict clinical outcome following antidepressant treatment. We reviewed studies investigating the short-term effects of serotonin manipulations (including medication) on the processing of emotional information, using the PubMed and PsycInfo databases. Twenty-five studies were identified. Serotonin manipulations were found to affect attentional bias, facial emotion recognition, emotional memory, dysfunctional attitudes, and decision making. The sequential link between changes in emotional processing and mood remains to be further investigated. The number of studies on serotonin manipulations and emotional information processing in currently depressed subjects is small. No studies have yet directly tested the link between emotional information processing and clinical outcome during the course of antidepressant treatment. Serotonin function is related to several aspects of emotional information processing, but it is unknown whether these changes predict or have any relationship with clinical outcome. Suggestions for future research are provided.

  6. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.
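
    A hedged sketch of the AU-space idea: landmark-derived distances serve as coordinates along action-unit-like axes, which a standard classifier maps to expressions. The two features, landmark names, and toy data are illustrative only, not the paper's AU definitions.

    ```python
    # Landmark distances as coordinates in a toy 2-D "AU space",
    # classified into expressions with a linear SVM.
    import numpy as np
    from sklearn.svm import SVC

    def au_coordinates(landmarks):
        """landmarks: dict of (x, y, z) points; the key names are assumed."""
        mouth_open = np.linalg.norm(np.subtract(landmarks['lip_top'],
                                                landmarks['lip_bottom']))
        brow_raise = np.linalg.norm(np.subtract(landmarks['brow_inner'],
                                                landmarks['eye_inner']))
        return np.array([mouth_open, brow_raise])   # a point in "AU space"

    # Toy training set: happy = open mouth, surprise = raised brows (labels 0/1).
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal([2.0, 1.0], 0.2, (30, 2)),    # happy-like
                   rng.normal([1.0, 2.0], 0.2, (30, 2))])   # surprise-like
    y = np.array([0] * 30 + [1] * 30)
    clf = SVC(kernel='linear').fit(X, y)
    print(clf.predict([[2.1, 0.9], [0.9, 2.2]]))   # -> [0 1]
    ```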

  8. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    PubMed

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    A positivity recognition bias has been reported for facial expressions, as well as for memory and visual stimuli, in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients remains controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. We therefore examined whether recognition of positive facial expressions was preserved in AD patients, using a new method that eliminated the influence of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, participants were required to match stimulus and answer images rather than use verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than to the other five expressions. In AD patients, recognition of happiness was thus relatively preserved: it was the most sensitively recognized expression and resisted the influences of both age and disease.

  9. Following the time course of face gender and expression processing: a task-dependent ERP study.

    PubMed

    Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa

    2014-05-01

    The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization), facial properties were task-relevant, while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later, in the face-sensitive N170 component. This component showed differential modulation by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN, or early posterior negativity): amplitudes were less positive in the presence of angry faces and during performance of the gender and expression tasks. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex), which showed enhanced amplitude in the gender task. The pattern of results adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) under different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Positive association between vocal and facial attractiveness in women but not in men: A cross-cultural study.

    PubMed

    Valentova, Jaroslava Varella; Varella, Marco Antonio Corrêa; Havlíček, Jan; Kleisner, Karel

    2017-02-01

    Various species use multiple sensory modalities in their communication processes. In humans, female facial appearance and vocal display are correlated, and it has been suggested that they serve as redundant markers indicating the bearer's reproductive potential and/or residual fertility. In men, evidence for redundancy of facial and vocal attractiveness is ambiguous. We tested the redundancy/multiple signals hypothesis by correlating perceived facial and vocal attractiveness in men and women from two different populations, Brazil and the Czech Republic. We also investigated whether facial and vocal attractiveness are linked to facial morphology. Standardized facial pictures and vocal samples of 86 women (47 from Brazil) and 81 men (41 from Brazil), aged 18-35, were rated for attractiveness by opposite-sex raters. Facial and vocal attractiveness were found to correlate positively in women but not in men. We further applied geometric morphometrics and regressed facial shape coordinates on facial and vocal attractiveness ratings. In women, facial shape was linked to facial attractiveness, but there was no association between facial shape and vocal attractiveness. In men, none of these associations was significant. Having shown that women with more attractive faces also possess more attractive voices, we thus only partly supported the redundant signal hypothesis. Copyright © 2016 Elsevier B.V. All rights reserved.
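
    The shape-regression step reported above can be sketched as follows, assuming 2-D landmark configurations and mean attractiveness ratings are available. For brevity, every configuration is aligned to a single reference with scipy's Procrustes routine, a simplification of generalized Procrustes analysis (which iterates toward a mean shape); all data are synthetic placeholders.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_faces, n_landmarks = 80, 20
shapes = rng.normal(size=(n_faces, n_landmarks, 2))  # landmark configurations
ratings = rng.normal(size=n_faces)                   # stand-in attractiveness

# Align each configuration to the first one (simplified Procrustes step).
reference = shapes[0]
aligned = np.array([procrustes(reference, s)[1] for s in shapes])
coords = aligned.reshape(n_faces, -1)                # flattened shape coordinates

# Regress shape coordinates on attractiveness ratings.
model = LinearRegression().fit(ratings.reshape(-1, 1), coords)
print("shape change per unit attractiveness (first coords):", model.coef_.ravel()[:4])
```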

  11. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, and face recognition robust to expression variation. Previous approaches can be classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variation in facial feature points persists even across similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed that combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways compared to previous work. First, the facial feature points are automatically detected using an active appearance model; from these, shape-based recognition is performed using the ratios between facial feature points based on the facial action coding system. Second, an SVM trained to recognize same and different expression classes is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four expressions: neutral, smile, anger, and scream. By assigning the input facial image to the expression whose SVM output is at a minimum, the accuracy of expression recognition is much improved. Experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
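
    The score-level fusion step lends itself to a short sketch: an SVM receives the two matching scores (shape-based and appearance-based) for a face pair and learns the boundary between same- and different-expression pairs. The scores and labels below are simulated; the actual system's features and training data are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 300
scores = rng.uniform(0.0, 1.0, size=(n, 2))   # [shape_score, appearance_score]
# Simulated labels: pairs with higher combined scores tend to match.
same = (scores.mean(axis=1) + rng.normal(0, 0.1, n) > 0.5).astype(int)

# The SVM learns the fusion rule over the two matching scores.
fusion_svm = SVC(kernel="rbf", probability=True).fit(scores, same)
new_pair = np.array([[0.7, 0.6]])             # scores for an unseen pair
print("P(same expression):", fusion_svm.predict_proba(new_pair)[0, 1].round(2))
```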

  12. Subliminal Face Emotion Processing: A Comparison of Fearful and Disgusted Faces

    PubMed Central

    Khalid, Shah; Ansorge, Ulrich

    2017-01-01

    Prior research has provided evidence for (1) subcortical processing of subliminal facial expressions of emotion and (2) the emotion-specificity of these processes. Here, we investigated whether this is also true for the processing of the subliminal facial display of disgust. In Experiment 1, we used differently filtered masked prime faces portraying emotionally neutral or disgusted expressions, presented prior to clearly visible target faces, to test whether the masked primes nonetheless influenced target processing. Whereas we found evidence for subliminal face congruence or priming effects, in particular reverse priming by low-spatial-frequency disgusted face primes, we did not find any support for a subcortical origin of the effect. In Experiment 2, we compared the influence of subliminal disgusted faces with that of subliminal fearful faces and demonstrated a behavioral performance difference between the two, pointing to emotion-specific processing of the disgusted facial expressions. In both experiments, we also tested for the dependence of subliminal emotional face processing on spatial attention, with mixed results suggesting attention-independence in Experiment 1 but not in Experiment 2, and we found perfect masking of the face primes, that is, proof of the subliminality of the prime faces. Based on our findings, we speculate that subliminal facial expressions of disgust could afford easy avoidance of these faces. This could be a unique effect of disgusted faces as compared to other emotional facial displays, at least under the conditions studied here. PMID:28680413

  13. Common and distinct neural correlates of facial emotion processing in social anxiety disorder and Williams syndrome: A systematic review and voxel-based meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Binelli, C; Subirà, S; Batalla, A; Muñiz, A; Sugranyés, G; Crippa, J A; Farré, M; Pérez-Jurado, L; Martín-Santos, R

    2014-11-01

    Social Anxiety Disorder (SAD) and Williams-Beuren Syndrome (WS) are two conditions which seem to be at opposite ends in the continuum of social fear but show compromised abilities in some overlapping areas, including some social interactions, gaze contact and processing of facial emotional cues. The increase in the number of neuroimaging studies has greatly expanded our knowledge of the neural bases of facial emotion processing in both conditions. However, to date, SAD and WS have not been compared. We conducted a systematic review of functional magnetic resonance imaging (fMRI) studies comparing SAD and WS cases to healthy control participants (HC) using facial emotion processing paradigms. Two researchers conducted comprehensive PubMed/Medline searches to identify all fMRI studies of facial emotion processing in SAD and WS. The following search key-words were used: "emotion processing"; "facial emotion"; "social anxiety"; "social phobia"; "Williams syndrome"; "neuroimaging"; "functional magnetic resonance"; "fMRI" and their combinations, as well as terms specifying individual facial emotions. We extracted spatial coordinates from each study and conducted two separate voxel-wise activation likelihood estimation meta-analyses, one for SAD and one for WS. Twenty-two studies met the inclusion criteria: 17 studies of SAD and five of WS. We found evidence for both common and distinct patterns of neural activation. Limbic engagement was common to SAD and WS during facial emotion processing, although we observed opposite patterns of activation for each disorder. Compared to HC, SAD cases showed hyperactivation of the amygdala, the parahippocampal gyrus and the globus pallidus. Compared to controls, participants with WS showed hypoactivation of these regions. Differential activation in a number of regions specific to either condition was also identified: SAD cases exhibited greater activation of the insula, putamen, the superior temporal gyrus, medial frontal regions and the cuneus, while WS subjects showed decreased activation in the inferior region of the parietal lobule. The identification of limbic structures as a shared correlate and the patterns of activation observed for each condition may reflect the aberrant patterns of facial emotion processing that the two conditions share, and may contribute to explaining part of the underlying neural substrate of exaggerated/diminished fear responses to social cues that characterize SAD and WS respectively. We believe that insights from WS and the inclusion of this syndrome as a control group in future experimental studies may improve our understanding of the neural correlates of social fear in general, and of SAD in particular. Copyright © 2014 Elsevier Ltd. All rights reserved.
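
    As a rough illustration of the activation likelihood estimation (ALE) approach used in this meta-analysis: each study's reported foci are blurred into a modeled-activation map, and the ALE value at a voxel is the probability that at least one study activates it. The sketch below is a toy version (fixed isotropic Gaussian, tiny grid, invented foci), not the calibrated algorithm the authors used.

```python
import numpy as np

def modeled_activation(grid_shape, foci, sigma):
    """Modeled-activation (MA) map for one study: the maximum of Gaussian
    kernels centred on each reported activation focus (voxel coordinates)."""
    grid = np.indices(grid_shape).reshape(3, -1).T     # every voxel coordinate
    ma = np.zeros(len(grid))
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma.reshape(grid_shape)

# Invented foci (voxel coordinates) for three toy studies.
grid_shape = (20, 20, 20)
studies = [[(10, 10, 10), (5, 12, 8)], [(11, 9, 10)], [(10, 11, 11), (15, 4, 6)]]
ma_maps = [modeled_activation(grid_shape, foci, sigma=2.0) for foci in studies]

# ALE map: probability that at least one study activates each voxel.
ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)
print("peak ALE value:", float(ale.max().round(3)))
```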

  14. Effects of delta-9-tetrahydrocannabinol on evaluation of emotional images

    PubMed Central

    Ballard, Michael E; Bedi, Gillinder; de Wit, Harriet

    2013-01-01

    There is growing evidence that drugs of abuse alter processing of emotional information in ways that could be attractive to users. Our recent report that Δ9-tetrahydrocannabinol (THC) diminishes amygdalar activation in response to threat-related faces suggests that THC may modify evaluation of emotionally salient, particularly negative or threatening, stimuli. In this study, we examined the effects of acute THC on evaluation of emotional images. Healthy volunteers received two doses of THC (7.5 and 15 mg; p.o.) and placebo across separate sessions before performing tasks assessing facial emotion recognition and emotional responses to pictures of emotional scenes. THC significantly impaired recognition of facial fear and anger, but only marginally impaired recognition of sadness and happiness. The drug did not consistently affect ratings of emotional scenes. THC's effects on emotional evaluation were not clearly related to its mood-altering effects. These results support our previous work and show that THC reduces perception of facial threat. Nevertheless, THC does not appear to positively bias evaluation of emotional stimuli in general. PMID:22585232

  15. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
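
    The network construction lends itself to a sketch with networkx: pairwise similarity ratings become edges above a threshold, and the graph's clustering and average path length are compared against a size-matched random graph, the usual small-world signature. The handful of ratings below are invented placeholders for the study's 3240 rated pairs.

```python
import networkx as nx

# Invented similarity ratings between facial-emotion images (0-1); an edge
# connects two images whose rated similarity exceeds a threshold.
similarity = {("happy", "surprise"): 0.80, ("surprise", "fear"): 0.70,
              ("fear", "anger"): 0.75, ("anger", "disgust"): 0.72,
              ("disgust", "sadness"): 0.65, ("sadness", "happy_morph"): 0.60,
              ("happy_morph", "happy"): 0.90, ("happy", "fear"): 0.55}

G = nx.Graph([pair for pair, s in similarity.items() if s > 0.5])

# Small-world signature: clustering well above a size-matched random graph,
# with a comparably short average path length.
rand = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
print("clustering:", round(nx.average_clustering(G), 2),
      "vs random:", round(nx.average_clustering(rand), 2))
print("mean path length:", round(nx.average_shortest_path_length(G), 2))
```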

  16. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject variations), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. This texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use edge information, which depends on the shapes of the significant facial components, to address the texture variation problems induced by plastic surgery. To ensure that the significant facial components contribute useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and Labeled Faces in the Wild (LFW) databases, which exhibit illumination and expression variations, and on the plastic surgery database, which exhibits texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amid expression and illumination problems, and outperforms the existing plastic surgery face recognition methods reported in the literature.
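
    A minimal sketch of an edge-based Gabor pipeline of this kind, assuming OpenCV: normalize illumination, extract an edge map, then pool the responses of a small Gabor bank applied to the edge image rather than to the raw gray levels. CLAHE and Canny here are generic stand-ins for the paper's own normalization and edge steps, and the 4-value descriptor is far smaller than a realistic one.

```python
import cv2
import numpy as np

def edge_gabor_features(gray_face):
    """Edge-based Gabor descriptor (sketch): normalize illumination, extract
    edges, then filter the edge image (not the raw gray levels) with a small
    Gabor bank so the features reflect component shape rather than texture."""
    norm = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray_face)
    edges = cv2.Canny(norm, 50, 150)
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        response = cv2.filter2D(edges.astype(np.float32), cv2.CV_32F, kernel)
        feats.append(float(np.abs(response).mean()))    # pooled magnitude
    return np.array(feats)

# Random noise stands in for a cropped 128x128 gray-level face image.
face = np.random.default_rng(4).uniform(0, 255, (128, 128)).astype(np.uint8)
print("feature vector:", edge_gabor_features(face).round(2))
```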

  17. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    PubMed

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.

  18. Facial expression primes and implicit regulation of negative emotion.

    PubMed

    Yoon, HeungSik; Kim, Shin Ah; Kim, Sang Hee

    2015-06-17

    An individual's responses to emotional information are influenced not only by the emotional quality of the information, but also by the context in which the information is presented. We hypothesized that facial expressions of happiness and anger would serve as primes to modulate subjective and neural responses to subsequently presented negative information. To test this hypothesis, we conducted a functional MRI study in which the brains of healthy adults were scanned while they performed an emotion-rating task. During the task, participants viewed a series of negative and neutral photos, one at a time; each photo was presented after a picture showing a face expressing a happy, angry, or neutral emotion. Brain imaging results showed that, compared with neutral primes, happy facial primes increased activation during negative emotion in the dorsal anterior cingulate cortex and the right ventrolateral prefrontal cortex, which are typically implicated in conflict detection and implicit emotion control, respectively. Conversely, relative to neutral primes, angry primes activated the right middle temporal gyrus and the left supramarginal gyrus during the experience of negative emotion. Activity in the amygdala in response to negative emotion was marginally reduced after exposure to happy primes compared with angry primes. Relative to neutral primes, angry facial primes increased the subjectively experienced intensity of negative emotion. These results suggest that prior exposure to facial expressions of emotion modulates the subsequent experience of negative emotion by implicitly activating the emotion-regulation system.

  19. Preferential responses in amygdala and insula during presentation of facial contempt and disgust.

    PubMed

    Sambataro, Fabio; Dimalta, Savino; Di Giorgio, Annabella; Taurisano, Paolo; Blasi, Giuseppe; Scarabino, Tommaso; Giannatempo, Giuseppe; Nardini, Marcello; Bertolino, Alessandro

    2006-10-01

    Some authors consider contempt to be a basic emotion, while others consider it a variant of disgust. The neural correlates of contempt have not so far been specifically contrasted with those of disgust. Using functional magnetic resonance imaging (fMRI), we investigated the neural networks involved in the processing of facial contempt and disgust in 24 healthy subjects. Facial recognition of contempt was lower than that of disgust and of neutral faces. The imaging data indicated significant activity in the amygdala and in the globus pallidus and putamen during processing of contemptuous faces. The bilateral insula and caudate nuclei, as well as the left and right inferior frontal gyri, were engaged during processing of disgusted faces. Moreover, direct comparisons of contempt vs. disgust yielded significantly different activations in the amygdala. On the other hand, disgusted faces elicited greater activation than contemptuous faces in the right insula and caudate. Our findings suggest preferential involvement of different neural substrates in the processing of facial emotional expressions of contempt and disgust.

  1. Coherence explored between emotion components: evidence from event-related potentials and facial electromyography.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2014-04-01

    Componential theories assume that emotion episodes consist of emergent and dynamic response changes to relevant events in different components, such as appraisal, physiology, motivation, expression, and subjective feeling. In particular, Scherer's Component Process Model hypothesizes that subjective feeling emerges when the synchronization (or coherence) of appraisal-driven changes between emotion components has reached a critical threshold. We examined the prerequisite of this synchronization hypothesis for appraisal-driven response changes in facial expression. The appraisal process was manipulated using feedback stimuli presented in a gambling task. Participants' responses to the feedback were investigated in concurrently recorded brain activity related to appraisal (event-related potentials, ERP) and facial muscle activity (electromyography, EMG). Using principal component analysis, we examined the prediction of appraisal-driven response changes in facial EMG. Results support this prediction: early cognitive processes (related to the feedback-related negativity) seem to primarily affect the upper face, whereas processes that modulate P300 amplitudes tend to predominantly drive cheek region responses. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. The human amygdala parametrically encodes the intensity of specific facial emotions and their categorical ambiguity

    PubMed Central

    Wang, Shuo; Yu, Rongjun; Tyszka, J. Michael; Zhen, Shanshan; Kovach, Christopher; Sun, Sai; Huang, Yi; Hurlemann, Rene; Ross, Ian B.; Chung, Jeffrey M.; Mamelak, Adam N.; Adolphs, Ralph; Rutishauser, Ueli

    2017-01-01

    The human amygdala is a key structure for processing emotional facial expressions, but it remains unclear what aspects of emotion are processed. We investigated this question with three different approaches: behavioural analysis of 3 amygdala lesion patients, neuroimaging of 19 healthy adults, and single-neuron recordings in 9 neurosurgical patients. The lesion patients showed a shift in behavioural sensitivity to fear, and amygdala BOLD responses were modulated by both fear and emotion ambiguity (the uncertainty that a facial expression is categorized as fearful or happy). We found two populations of neurons, one whose response correlated with increasing degree of fear, or happiness, and a second whose response primarily decreased as a linear function of emotion ambiguity. Together, our results indicate that the human amygdala processes both the degree of emotion in facial expressions and the categorical ambiguity of the emotion shown and that these two aspects of amygdala processing can be most clearly distinguished at the level of single neurons. PMID:28429707

  3. Effects of touch on emotional face processing: A study of event-related potentials, facial EMG and cardiac activity.

    PubMed

    Spapé, M M; Harjunen, Ville; Ravaja, N

    2017-03-01

    Being touched is known to affect emotion, and even a casual touch can elicit positive feelings and affinity. Psychophysiological studies have recently shown that tactile primes affect visual evoked potentials to emotional stimuli, suggesting altered affective stimulus processing. As, however, these studies approached emotion from a purely unidimensional perspective, it remains unclear whether touch biases emotional evaluation or a more general feature such as salience. Here, we investigated how simple tactile primes modulate event related potentials (ERPs), facial EMG and cardiac response to pictures of facial expressions of emotion. All measures replicated known effects of emotional face processing: Disgust and fear modulated early ERPs, anger increased the cardiac orienting response, and expressions elicited emotion-congruent facial EMG activity. Tactile primes also affected these measures, but priming never interacted with the type of emotional expression. Thus, touch may additively affect general stimulus processing, but it does not bias or modulate immediate affective evaluation. Copyright © 2017. Published by Elsevier B.V.

  4. The persuasive power of emotions: Effects of emotional expressions on attitude formation and change.

    PubMed

    Van Kleef, Gerben A; van den Berg, Helma; Heerdink, Marc W

    2015-07-01

    Despite a long-standing interest in the intrapersonal role of affect in persuasion, the interpersonal effects of emotions on persuasion remain poorly understood-how do one person's emotional expressions shape others' attitudes? Drawing on emotions as social information (EASI) theory (Van Kleef, 2009), we hypothesized that people use the emotional expressions of others to inform their own attitudes, but only when they are sufficiently motivated and able to process those expressions. Five experiments support these ideas. Participants reported more positive attitudes about various topics after seeing a source's sad (rather than happy) expressions when topics were negatively framed (e.g., abandoning bobsleighing from the Olympics). Conversely, participants reported more positive attitudes after seeing happy (rather than sad) expressions when topics were positively framed (e.g., introducing kite surfing at the Olympics). This suggests that participants used the source's emotional expressions as information when forming their own attitudes. Supporting this interpretation, effects were mitigated when participants' information processing was undermined by cognitive load or was chronically low. Moreover, a source's anger expressions engendered negative attitude change when directed at the attitude object and positive change when directed at the recipient's attitude. Effects occurred regardless of whether emotional expressions were manipulated through written words, pictures of facial expressions, film clips containing both facial and vocal emotional expressions, or emoticons. The findings support EASI theory and indicate that emotional expressions are a powerful source of social influence. (c) 2015 APA, all rights reserved).

  5. Age-Related Changes in the Processing of Emotional Faces in a Dual-Task Paradigm.

    PubMed

    Casares-Guillén, Carmen; García-Rodríguez, Beatriz; Delgado, Marisa; Ellgring, Heiner

    2016-01-01

    Background/Study Context: Age-related changes appear to affect the ability to identify emotional facial expressions in dual-task conditions (i.e., while simultaneously performing a second visual task). The level of interference generated by the secondary task depends on the phase of emotional processing affected by the interference and the nature of the secondary task. The aim of the present study was to investigate the effect of these variables on age-related changes in the processing of emotional faces. The identification of emotional facial expressions (EFEs) was assessed in a dual-task paradigm using the following variables: (a) the phase during which interference was applied (encoding vs. retrieval); and (b) the nature of the interfering stimulus (visuospatial vs. verbal). The sample consisted of 24 healthy older adults (mean age = 75.38) and 40 younger adults (mean age = 26.90). The accuracy of EFE identification was calculated for all experimental conditions. Consistent with our hypothesis, the performance of the older group was poorer than that of the younger group in all experimental conditions. Dual-task performance was poorer when the interference occurred during the encoding phase of emotional face processing and when both tasks were of the same nature (i.e., when the experimental condition was more demanding in terms of attention). These results provide empirical evidence of age-related deficits in the identification of emotional facial expressions, which may be partially explained by the impairment of cognitive resources specific to this task. These findings may account for the difficulties experienced by the elderly during social interactions that require the concomitant processing of emotional and environmental information.

  6. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  7. Sex, Sexual Orientation, and Identification of Positive and Negative Facial Affect

    ERIC Educational Resources Information Center

    Rahman, Qazi; Wilson, Glenn D.; Abrahams, Sharon

    2004-01-01

    Sex and sexual orientation related differences in processing of happy and sad facial emotions were examined using an experimental facial emotion recognition paradigm with a large sample (N=240). Analysis of covariance (controlling for age and IQ) revealed that women (irrespective of sexual orientation) had faster reaction times than men for…

  8. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  9. Contextual interference processing during fast categorisations of facial expressions.

    PubMed

    Frühholz, Sascha; Trautmann-Lengsfeld, Sina A; Herrmann, Manfred

    2011-09-01

    We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or did not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of the target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions, indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects, most likely due to missing colour-emotion associations. The results are discussed with respect to the inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

  10. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  11. A selective emotional decision-making bias elicited by facial expressions.

    PubMed

    Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B

    2012-01-01

    Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling relative to angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can best sway our choices.

  12. Gestural coupling and social cognition: Möbius Syndrome as a case study

    PubMed Central

    Krueger, Joel; Michael, John

    2012-01-01

    Social cognition researchers have become increasingly interested in the ways that behavioral, physiological, and neural coupling facilitate social interaction and interpersonal understanding. We distinguish two ways of conceptualizing the role of such coupling processes in social cognition: strong and moderate interactionism. According to strong interactionism (SI), low-level coupling processes are alternatives to higher-level individual cognitive processes; the former at least sometimes render the latter superfluous. Moderate interactionism (MI) on the other hand, is an integrative approach. Its guiding assumption is that higher-level cognitive processes are likely to have been shaped by the need to coordinate, modulate, and extract information from low-level coupling processes. In this paper, we present a case study on Möbius Syndrome (MS) in order to contrast SI and MI. We show how MS—a form of congenital bilateral facial paralysis—can be a fruitful source of insight for research exploring the relation between high-level cognition and low-level coupling. Lacking a capacity for facial expression, individuals with MS are deprived of a primary channel for gestural coupling. According to SI, they lack an essential enabling feature for social interaction and interpersonal understanding more generally and thus ought to exhibit severe deficits in these areas. We challenge SI's prediction and show how MS cases offer compelling reasons for instead adopting MI's pluralistic model of social interaction and interpersonal understanding. We conclude that investigations of coupling processes within social interaction should inform rather than marginalize or eliminate investigation of higher-level individual cognition. PMID:22514529

  13. Decoding Task and Stimulus Representations in Face-responsive Cortex

    PubMed Central

    Kliemann, Dorit; Jacoby, Nir; Anzellotti, Stefano; Saxe, Rebecca R.

    2017-01-01

    Faces provide rich social information about others’ stable traits (e.g., age) and fleeting states of mind (e.g., emotional expression). While some of these facial aspects may be processed automatically, observers can also deliberately attend to some features while ignoring others. It remains unclear how internal goals (e.g., task context) influence the representational geometry of variable and stable facial aspects in face-responsive cortex. We investigated neural response patterns related to decoding (i) the intention to attend to a facial aspect before its perception, (ii) the attended aspect of a face, and (iii) stimulus properties. We measured neural responses while subjects watched videos of dynamic positive and negative expressions and judged either the age or the expression's valence. Split-half multivoxel pattern analyses (MVPA) showed that (i) the intention to attend to a specific aspect of a face can be decoded from left fronto-lateral, but not face-responsive, regions; (ii) during face perception, the attended aspect (age vs. emotion) could be robustly decoded from almost all face-responsive regions; and (iii) a stimulus property (valence) was represented in the right posterior superior temporal sulcus and medial prefrontal cortices. The effect of deliberately shifting the focus of attention on representations suggests a powerful influence of top-down signals on the cortical representation of social information, varying across cortical regions and likely reflecting neural flexibility to optimally integrate internal goals and dynamic perceptual input. PMID:27978778
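
    The split-half MVPA logic can be sketched in a few lines: condition-wise voxel patterns are estimated from two independent halves of the data, and decoding counts as successful when a pattern correlates more with its own condition in the other half than with any other condition. The patterns below are synthetic stand-ins for the attention conditions; this is the generic correlation-based variant, not necessarily the authors' exact pipeline.

```python
import numpy as np

def splithalf_decoding(half1, half2):
    """Correlation-based split-half pattern analysis (sketch).

    half1 / half2: dicts mapping condition name -> mean voxel pattern
    (1-D array) estimated from independent halves of the data. Decoding
    succeeds for a condition when its within-condition correlation across
    halves exceeds every between-condition correlation.
    """
    conditions = list(half1)
    correct = 0
    for c in conditions:
        within = np.corrcoef(half1[c], half2[c])[0, 1]
        between = max(np.corrcoef(half1[c], half2[o])[0, 1]
                      for o in conditions if o != c)
        correct += within > between
    return correct / len(conditions)

# Synthetic 100-voxel patterns for two attention conditions.
rng = np.random.default_rng(5)
base = {c: rng.normal(size=100) for c in ("attend_age", "attend_emotion")}
h1 = {c: p + rng.normal(0, 0.5, size=100) for c, p in base.items()}
h2 = {c: p + rng.normal(0, 0.5, size=100) for c, p in base.items()}
print("split-half decoding accuracy:", splithalf_decoding(h1, h2))
```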

  14. Information-Processing Alternatives to Holistic Perception: Identifying the Mechanisms of Secondary-Level Holism within a Categorization Paradigm

    ERIC Educational Resources Information Center

    Fific, Mario; Townsend, James T.

    2010-01-01

    Failure to selectively attend to a facial feature, in the part-to-whole paradigm, has been taken as evidence of holistic perception in a large body of face perception literature. In this article, we demonstrate that although failure of selective attention is a necessary property of holistic perception, its presence alone is not sufficient to…

  15. Sex Differences in Face Processing: Are Women Less Lateralized and Faster than Men?

    ERIC Educational Resources Information Center

    Godard, Ornella; Fiori, Nicole

    2010-01-01

    The aim of this study was to determine the influence of sex on hemispheric asymmetry and cooperation in a face recognition task. We used a masked priming paradigm in which the prime stimulus was centrally presented; it could be a bisymmetric face or a hemi-face in which facial information was presented in the left or the right visual field and…

  16. Lateralisation for processing facial emotion and anxiety: contrasting state, trait and social anxiety.

    PubMed

    Bourne, Victoria J; Vladeanu, Matei

    2011-04-01

    Recent neuropsychological studies have attempted to distinguish between different types of anxiety by contrasting patterns of brain organisation or activation; however, lateralisation for processing emotional stimuli has received relatively little attention. This study examines the relationship between strength of lateralisation for the processing of facial expressions of emotion and three measures of anxiety: state anxiety, trait anxiety and social anxiety. Across all six of the basic emotions (anger, disgust, fear, happiness, sadness, surprise) the same patterns of association were found. Participants with high levels of trait anxiety were more strongly lateralised to the right hemisphere for processing facial emotion. In contrast, participants with high levels of self-reported physiological arousal in response to social anxiety were more weakly lateralised to the right hemisphere, or even lateralised to the left hemisphere, for the processing of facial emotion. There were also sex differences in these associations: the relationships were evident for males only. The finding of distinct patterns of lateralisation for trait anxiety and self-reported physiological arousal suggests different neural circuitry for trait and social anxiety. Copyright © 2011. Published by Elsevier Ltd.

  17. [Establishment of a network-based database of 3D facial models for plastic surgery].

    PubMed

    Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun

    2008-07-01

    To collect three-dimensional (3D) facial data from 30 patients with facial deformities using a 3D scanner, and to establish a professional, Internet-based database that can support clinical intervention. The primitive point data of the face topography were collected with the 3D scanner. The 3D point cloud was then edited with reverse-engineering software to reconstruct a 3D model of the face. The database system was divided into three parts: basic information, disease information and surgery information. The programming language of the web system is Java. The links between the database tables are reliable, and query operations and data mining are convenient. Users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and web system adapted to plastic surgery of the human face. It can be used both in the clinic and in basic research.
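
    The original system is implemented in Java; purely to illustrate the three-part structure described above (basic, disease, and surgery information), here is a hypothetical relational schema sketched with Python's built-in sqlite3. All table and column names are invented, not taken from the system.

```python
import sqlite3

# In-memory database illustrating the three-part record structure.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE basic_info (
    patient_id     INTEGER PRIMARY KEY,
    name           TEXT,
    sex            TEXT,
    birth_date     TEXT,
    model_path     TEXT   -- file path of the reconstructed 3D facial model
);
CREATE TABLE disease_info (
    patient_id     INTEGER REFERENCES basic_info(patient_id),
    diagnosis      TEXT,
    deformity_site TEXT
);
CREATE TABLE surgery_info (
    patient_id     INTEGER REFERENCES basic_info(patient_id),
    procedure_name TEXT,
    surgery_date   TEXT,
    outcome_notes  TEXT
);
""")
con.execute("INSERT INTO basic_info VALUES (1, 'case01', 'F', '1990-03-02', 'models/case01.ply')")
print(con.execute("SELECT name, model_path FROM basic_info").fetchall())
```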

  18. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions play an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and yielding results comparable to studies that use whole-face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
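
    The SFS step maps directly onto scikit-learn's SequentialFeatureSelector, so a compact sketch is possible. The random features and labels below merely stand in for the geometric eye and eyebrow measurements and the five expression classes; the subset size is arbitrary.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Random stand-ins for geometric eye/eyebrow features and 5 expression labels.
rng = np.random.default_rng(6)
X = rng.normal(size=(250, 30))        # 30 candidate geometric features
y = rng.integers(0, 5, size=250)      # 5 facial-expression classes

# Sequential forward selection: greedily grow the feature subset that
# maximizes cross-validated SVM accuracy, then train on the reduced set.
svm = SVC(kernel="linear")
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=3).fit(X, y)
clf = svm.fit(sfs.transform(X), y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```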

  19. Right Hemispheric Dominance in Processing of Unconscious Negative Emotion

    ERIC Educational Resources Information Center

    Sato, Wataru; Aoki, Satoshi

    2006-01-01

    Right hemispheric dominance in unconscious emotional processing has been suggested, but remains controversial. This issue was investigated using the subliminal affective priming paradigm combined with unilateral visual presentation in 40 normal subjects. In either left or right visual fields, angry facial expressions, happy facial expressions, or…

  1. The implicit processing of categorical and dimensional strategies: an fMRI study of facial emotion perception

    PubMed Central

    Matsuda, Yoshi-Taka; Fujimura, Tomomi; Katahira, Kentaro; Okada, Masato; Ueno, Kenichi; Cheng, Kang; Okanoya, Kazuo

    2013-01-01

    Our understanding of facial emotion perception has been dominated by two seemingly opposing theories: the categorical and dimensional theories. However, we have recently demonstrated that hybrid processing involving both categorical and dimensional perception can be induced in an implicit manner (Fujimura et al., 2012). The underlying neural mechanisms of this hybrid processing remain unknown. In this study, we tested the hypothesis that separate neural loci might intrinsically encode categorical and dimensional processing functions that serve as a basis for hybrid processing. We used functional magnetic resonance imaging to measure neural correlates while subjects passively viewed emotional faces and performed tasks that were unrelated to facial emotion processing. Activity in the right fusiform face area (FFA) increased in response to psychologically obvious emotions and decreased in response to ambiguous expressions, demonstrating the role of the FFA in categorical processing. The amygdala, insula and medial prefrontal cortex exhibited evidence of dimensional (linear) processing that correlated with physical changes in the emotional face stimuli. The occipital face area and superior temporal sulcus did not respond to these changes in the presented stimuli. Our results indicated that distinct neural loci process the physical and psychological aspects of facial emotion perception in a region-specific and implicit manner. PMID:24133426

  2. Gender discrimination and prediction on the basis of facial metric information.

    PubMed

    Fellous, J M

    1997-07-01

    Horizontal and vertical facial measurements are statistically independent. Discriminant analysis shows that five such normalized distances explain over 95% of the gender differences in "training" samples and predict the gender of 90% of novel test faces exhibiting various facial expressions. The robustness of the method and of its results is assessed. It is argued that these distances (termed fiducial) are compatible with those found experimentally in psychophysical and neurophysiological studies. In consequence, partial explanations for the effects observed in these experiments can be found in the intrinsic statistical nature of the facial stimuli used.
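
    An analysis of this kind is easy to sketch with scikit-learn: linear discriminant analysis over five normalized distance features, scored by cross-validation. The synthetic data simply shift male and female means apart; the study's actual fiducial distances and samples are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for five normalized fiducial distances per face,
# with male and female means shifted apart to mimic a gender difference.
rng = np.random.default_rng(7)
n = 120
sex = rng.integers(0, 2, size=n)                          # 0 = female, 1 = male
distances = rng.normal(size=(n, 5)) + 0.8 * sex[:, None]  # 5 distance features

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, distances, sex, cv=5).mean()
print(f"cross-validated gender-classification accuracy: {accuracy:.2f}")
```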

  3. When Early Experiences Build a Wall to Others’ Emotions: An Electrophysiological and Autonomic Study

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Sestito, Mariateresa; Ravera, Roberto; Gallese, Vittorio

    2013-01-01

    Facial expression of emotions is a powerful vehicle for communicating information about others’ emotional states, and it normally induces facial mimicry in observers. The aim of this study was to investigate whether early aversive experiences interfere with emotion recognition, facial mimicry, and the autonomic regulation of social behaviors. We conducted a facial emotion recognition task in a group of “street-boys” and in an age-matched control group. We recorded facial electromyography (EMG), a marker of facial mimicry, and respiratory sinus arrhythmia (RSA), an index of the recruitment of the autonomic system that promotes social behaviors and social predisposition, in response to the observation of facial expressions of emotions. Results showed an over-attribution of anger and reduced EMG responses during the observation of both positive and negative expressions only among the street-boys. Street-boys also showed lower RSA after observation of facial expressions and ineffective RSA suppression during presentation of non-threatening expressions. Our findings suggest that early aversive experiences alter not only emotion recognition but also facial mimicry of emotions. These deficits affect the autonomic regulation of social behaviors, inducing lower social predisposition after viewing facial expressions and ineffective recruitment of defensive behavior in response to non-threatening expressions. PMID:23593374

  4. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normally sighted individuals extract information about expressed emotions from face regions whose activity is diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining whether priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full face and then categorized the face’s emotion. Critically, the matched components were from regions that were diagnostic or non-diagnostic of the emotion portrayed by the full face. In Experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in Experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished, as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  5. Emotion perception across cultures: the role of cognitive mechanisms

    PubMed Central

    Engelmann, Jan B.; Pogosyan, Marianna

    2012-01-01

    Despite consistently documented cultural differences in the perception of facial expressions of emotion, the role of culture in shaping cognitive mechanisms that are central to emotion perception has received relatively little attention in past research. We review recent developments in cross-cultural psychology that provide particular insights into the modulatory role of culture on cognitive mechanisms involved in interpretations of facial expressions of emotion through two distinct routes: display rules and cognitive styles. Investigations of emotion intensity perception have demonstrated that facial expressions with varying levels of intensity of positive affect are perceived and categorized differently across cultures. Specifically, recent findings indicating significant levels of differentiation between intensity levels of facial expressions among American participants, as well as deviations from clear categorization of high and low intensity expressions among Japanese and Russian participants, suggest that display rules shape mental representations of emotions, such as intensity levels of emotion prototypes. Furthermore, a series of recent studies using eye tracking as a proxy for overt attention during face perception have identified culture-specific cognitive styles, such as the propensity to attend to very specific features of the face. Together, these results suggest a cascade of cultural influences on cognitive mechanisms involved in interpretations of facial expressions of emotion, whereby cultures impart specific behavioral practices that shape the way individuals process information from the environment. These cultural influences lead to differences in cognitive styles due to culture-specific attentional biases and emotion prototypes, which partially account for the gradient of cultural agreements and disagreements obtained in past investigations of emotion perception. PMID:23486743

  6. Emotion perception across cultures: the role of cognitive mechanisms.

    PubMed

    Engelmann, Jan B; Pogosyan, Marianna

    2013-01-01

    Despite consistently documented cultural differences in the perception of facial expressions of emotion, the role of culture in shaping cognitive mechanisms that are central to emotion perception has received relatively little attention in past research. We review recent developments in cross-cultural psychology that provide particular insights into the modulatory role of culture on cognitive mechanisms involved in interpretations of facial expressions of emotion through two distinct routes: display rules and cognitive styles. Investigations of emotion intensity perception have demonstrated that facial expressions with varying levels of intensity of positive affect are perceived and categorized differently across cultures. Specifically, recent findings indicating significant levels of differentiation between intensity levels of facial expressions among American participants, as well as deviations from clear categorization of high and low intensity expressions among Japanese and Russian participants, suggest that display rules shape mental representations of emotions, such as intensity levels of emotion prototypes. Furthermore, a series of recent studies using eye tracking as a proxy for overt attention during face perception have identified culture-specific cognitive styles, such as the propensity to attend to very specific features of the face. Together, these results suggest a cascade of cultural influences on cognitive mechanisms involved in interpretations of facial expressions of emotion, whereby cultures impart specific behavioral practices that shape the way individuals process information from the environment. These cultural influences lead to differences in cognitive styles due to culture-specific attentional biases and emotion prototypes, which partially account for the gradient of cultural agreements and disagreements obtained in past investigations of emotion perception.

  7. Effects of levodopa-carbidopa-entacapone and smoked cocaine on facial affect recognition in cocaine smokers.

    PubMed

    Bedi, Gillinder; Shiffrin, Laura; Vadhan, Nehal P; Nunes, Edward V; Foltin, Richard W; Bisaga, Adam

    2016-04-01

    In addition to difficulties in daily social functioning, regular cocaine users have decrements in social processing (the cognitive and affective processes underlying social behavior) relative to non-users. Little is known, however, about the effects of clinically relevant pharmacological agents, such as cocaine and potential treatment medications, on social processing in cocaine users. Such drug effects could potentially alleviate or compound baseline social processing decrements in cocaine abusers. Here, we assessed the individual and combined effects of smoked cocaine and a potential treatment medication, levodopa-carbidopa-entacapone (LCE), on facial emotion recognition in cocaine smokers. Healthy non-treatment-seeking cocaine smokers (N = 14; two female) completed this 11-day inpatient within-subjects study. Participants received LCE (titrated to 400 mg/100 mg/200 mg b.i.d.) for five days, with the remaining time on placebo. The order of medication administration was counterbalanced. Facial emotion recognition was measured twice during target LCE dosing and twice on placebo: once without cocaine and once after repeated cocaine doses. LCE increased the response threshold for identification of facial fear, biasing responses away from fear identification. Cocaine had no effect on facial emotion recognition. Results highlight the possibility for candidate pharmacotherapies to have unintended impacts on social processing in cocaine users, potentially exacerbating already existing difficulties in this population. © The Author(s) 2016.

  8. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
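
    For readers unfamiliar with the technique, the frequency-tagging logic behind the ssVEP can be illustrated with a short sketch (not drawn from the review itself): a hypothetical EEG trace is reduced to its spectral amplitude at the flicker frequency via an FFT. The sampling rate, flicker frequency and signal below are all made-up demo values.

        import numpy as np

        def ssvep_amplitude(eeg, fs, tag_freq):
            """Spectral amplitude of a 1-D EEG trace at the tagging frequency.

            A minimal sketch: window the signal, take the FFT, and read out
            the bin closest to the stimulation ("tagging") frequency.
            """
            windowed = eeg * np.hanning(len(eeg))     # reduce spectral leakage
            spectrum = np.abs(np.fft.rfft(windowed)) / len(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - tag_freq))]

        # Hypothetical demo: a noisy 15 Hz "response" sampled at 500 Hz for 4 s.
        fs, tag = 500, 15.0
        t = np.arange(0, 4, 1.0 / fs)
        eeg = 0.8 * np.sin(2 * np.pi * tag * t) + np.random.randn(t.size)
        print(ssvep_amplitude(eeg, fs, tag))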

  9. Multimodal human communication--targeting facial expressions, speech content and prosody.

    PubMed

    Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2012-05-01

    Human communication is based on a dynamic information exchange across the communication channels of facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel to behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three further conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates deteriorated significantly once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind processes. This suggested participants' effort to infer the mental states of their counterparts and was accompanied by a decline in behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network nodes associated with human interactions constitutes a prerequisite for understanding the dynamics that underlie multimodal integration and explains the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. The beneficial effects of honeybee-venom serum on facial wrinkles in humans

    PubMed Central

    Han, Sang Mi; Hong, In Phyo; Woo, Soon Ok; Chun, Sung Nam; Park, Kwan Kyu; Nicholls, Young Mee; Pak, Sok Cheon

    2015-01-01

    Facial wrinkles are an undesirable outcome caused by extrinsic photodamage and intrinsic aging processes. Currently, no effective strategies are known to prevent facial wrinkles. We assessed the beneficial effects of bee-venom serum on the clinical signs of aging skin. Our results show that bee-venom serum treatment clinically improved facial wrinkles by decreasing total wrinkle area, total wrinkle count, and average wrinkle depth. Therefore, bee-venom serum may be effective for the improvement of skin wrinkles. PMID:26491274

  11. Negative ion treatment increases positive emotional processing in seasonal affective disorder.

    PubMed

    Harmer, C J; Charles, M; McTavish, S; Favaron, E; Cowen, P J

    2012-08-01

    Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has shown efficacy in randomized clinical trials in seasonal affective disorder (SAD). The current study investigated whether a single session of HDNI treatment could reverse the negative affective biases seen in seasonal depression, using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expressions and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism of treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.

  12. How social exclusion modulates social information processing: A behavioural dissociation between facial expressions and gaze direction

    PubMed Central

    Gallucci, Marcello; Ricciardelli, Paola

    2018-01-01

    Social exclusion is a painful experience that is felt as a threat to the human need to belong, can lead to increased aggressive and anti-social behaviours, and results in emotional and cognitive numbness. Excluded individuals also seem to show an automatic tuning to positivity: they tend to increase their selective attention towards social acceptance signals. Despite these known effects, the consequences of social exclusion on social information processing still need to be explored in depth. The aim of this study was to investigate the effects of social exclusion on the processing of two features that are strictly bound in the appraisal of the meaning of facial expressions: gaze direction and emotional expression. In two experiments (N = 60, N = 45), participants were asked to identify gaze direction or emotional expressions from facial stimuli in which both these features were manipulated. They performed these tasks in a four-block crossed design after being socially included or excluded using the Cyberball game. Participants' empathy and self-reported emotions were recorded using the Empathy Quotient (EQ) and PANAS questionnaires. The Need Threat Scale and three additional questions were also used as manipulation checks in the second experiment. In both experiments, excluded participants were less accurate than included participants in gaze direction discrimination. Modulatory effects of direct gaze (Experiment 1) and sad expression (Experiment 2) on the effects of social exclusion were found on response times (RTs) in the emotion recognition task. Specific differences between males and females in the reaction to social exclusion were also found in Experiment 2: excluded male participants tended to be less accurate and faster than included male participants, while excluded females showed more accurate and slower performance than included female participants. No influence of social exclusion on PANAS or EQ scores was found. Results are discussed in the context of the importance of identifying gaze direction in appraisal theories. PMID:29617410

  13. Photographic Standards for Patients With Facial Palsy and Recommendations by Members of the Sir Charles Bell Society.

    PubMed

    Santosa, Katherine B; Fattah, Adel; Gavilán, Javier; Hadlock, Tessa A; Snyder-Warwick, Alison K

    2017-07-01

    There is no widely accepted assessment tool or common language used by clinicians caring for patients with facial palsy, making the exchange of information challenging. Standardized photography may represent such a language and is imperative for the precise exchange of information and comparison of outcomes in this special patient population. To review the literature to evaluate the use of facial photography in the management of patients with facial palsy and to examine the use of photography in documenting facial nerve function among members of the Sir Charles Bell Society: a group of medical professionals dedicated to the care of patients with facial palsy. A literature search was performed to review photographic standards in patients with facial palsy. In addition, a cross-sectional survey of members of the Sir Charles Bell Society was conducted to examine the use of medical photography in documenting facial nerve function. The literature search and analysis were performed in August and September 2015, and the survey was conducted in August and September 2013. The literature review searched the EMBASE, CINAHL, and MEDLINE databases from the inception of each database through September 2015. Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. Articles that discussed patients with facial palsy and outlined photographic guidelines for this patient population were included in the study. The survey was disseminated to Sir Charles Bell Society members in electronic form. It consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. In total, 393 articles were identified in the literature search, 7 of which fit the inclusion criteria. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on photographic standards for the population with facial palsy. Eighty-three of 151 members (55%) of the Sir Charles Bell Society responded to the survey. All survey respondents used photographic documentation, but there was variability in which facial expressions were used. Eighty-two percent (68 of 83) used some form of videography. From these data, we propose a set of minimum photographic standards for patients with facial palsy, including the following 10 static views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view. There is no consensus on photographic standardization to report outcomes for patients with facial palsy. Minimum photographic standards for facial paralysis publications are proposed. Videography of the dynamic movements of these views should also be recorded.

  14. Facial Affect Processing and Depression Susceptibility: Cognitive Biases and Cognitive Neuroscience

    ERIC Educational Resources Information Center

    Bistricky, Steven L.; Ingram, Rick E.; Atchley, Ruth Ann

    2011-01-01

    Facial affect processing is essential to social development and functioning and is particularly relevant to models of depression. Although cognitive and interpersonal theories have long described different pathways to depression, cognitive-interpersonal and evolutionary social risk models of depression focus on the interrelation of interpersonal…

  15. Discrimination against Facially Stigmatized Applicants in Interviews: An Eye-Tracking and Face-to-Face Investigation

    ERIC Educational Resources Information Center

    Madera, Juan M.; Hebl, Michelle R.

    2012-01-01

    Drawing from theory and research on perceived stigma (Pryor, Reeder, Yeadon, & Hesson-McInnis, 2004), attentional processes (Rinck & Becker, 2006), working memory (Baddeley & Hitch, 1974), and regulatory resources (Muraven & Baumeister, 2000), the authors examined discrimination against facially stigmatized applicants and the processes involved.…

  16. Holistic Processing of Static and Moving Faces

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  17. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  18. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
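
    As a schematic illustration of the fusion step described above (not the authors' implementation), the sketch below assumes a rigid set of 3D landmark points, a pinhole camera with a made-up focal length, and a small-angle rotation model, and runs repeated Extended Kalman Filter measurement updates of a 6-DOF pose state; in the paper, the 2D landmarks fed into the filter come from Discriminative Shape Regression.

        import numpy as np

        def project(points3d, pose, f=800.0):
            """Pinhole-project 3D points given pose = (rx, ry, rz, tx, ty, tz).
            A first-order (small-angle) rotation keeps the sketch short."""
            rx, ry, rz, tx, ty, tz = pose
            R = np.array([[1.0, -rz, ry],
                          [rz, 1.0, -rx],
                          [-ry, rx, 1.0]])
            cam = points3d @ R.T + np.array([tx, ty, tz])
            return (f * cam[:, :2] / cam[:, 2:3]).ravel()   # stacked (u, v) pairs

        def ekf_update(pose, P, z, points3d, R_meas):
            """One EKF measurement update; z holds the 2D landmark positions
            produced by the image-based detector."""
            h = project(points3d, pose)
            H = np.zeros((len(z), 6))
            eps = 1e-5
            for j in range(6):                      # numerical Jacobian of h(pose)
                d = np.zeros(6)
                d[j] = eps
                H[:, j] = (project(points3d, pose + d) - h) / eps
            S = H @ P @ H.T + R_meas                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            pose = pose + K @ (z - h)
            P = (np.eye(6) - K @ H) @ P
            return pose, P

        # Hypothetical demo: recover a known pose from noisy 2D observations.
        rng = np.random.RandomState(0)
        model = rng.randn(10, 3) + np.array([0.0, 0.0, 10.0])
        true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.2, 0.5])
        z = project(model, true_pose) + 0.5 * rng.randn(20)
        pose, P = np.zeros(6), np.eye(6)
        for _ in range(20):
            pose, P = ekf_update(pose, P, z, model, np.eye(20) * 0.25)
        print(pose.round(3))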

  19. Blend Shape Interpolation and FACS for Realistic Avatar

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila

    2015-03-01

    The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. With face-to-face communication being the most natural form of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc. The mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expression. Facial expressions, being a very complex and important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions based on facial skin color and texture may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
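
    At its core, blend shape interpolation forms the animated mesh as the neutral mesh plus a weighted sum of per-target vertex offsets, and a FACS-style controller supplies the weights from Action Unit activations. The following minimal sketch (made-up meshes, names and weights, not the authors' system) shows that computation.

        import numpy as np

        def blend(neutral, targets, weights):
            """Blend shape interpolation: neutral mesh plus weighted offsets.

            neutral : (n_vertices, 3) rest-pose mesh
            targets : dict of expression name -> (n_vertices, 3) target mesh
            weights : dict of expression name -> blend weight in [0, 1]
            """
            out = neutral.astype(float).copy()
            for name, w in weights.items():
                out += w * (targets[name] - neutral)
            return out

        # Hypothetical 4-vertex "face" with one target per basic expression.
        rng = np.random.RandomState(0)
        neutral = np.zeros((4, 3))
        targets = {e: rng.rand(4, 3) for e in ("anger", "happiness", "sadness", "fear")}
        # A FACS-style controller would map Action Unit activations to weights.
        weights = {"anger": 0.0, "happiness": 0.7, "sadness": 0.0, "fear": 0.1}
        print(blend(neutral, targets, weights))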

  20. What's in a face? The role of skin tone, facial physiognomy, and color presentation mode of facial primes in affective priming effects.

    PubMed

    Stepanova, Elena V; Strube, Michael J

    2012-01-01

    Participants (N = 106) performed an affective priming task with facial primes that varied in skin tone and facial physiognomy and were presented either in color or in grayscale. Participants' racial evaluations were more positive for Eurocentric than for Afrocentric physiognomy faces. Light skin tone faces were evaluated more positively than dark skin tone faces, but the magnitude of this effect depended on the mode of color presentation. The results suggest that in affective priming tasks, faces might not be processed holistically; instead, visual features of facial priming stimuli independently affect implicit evaluations.

  1. Knowing how you are feeling depends on what's on my mind: Cognitive load and expression categorization.

    PubMed

    Ahmed, Lubna

    2018-03-01

    The ability to correctly interpret facial expressions is key to effective social interactions. People are well rehearsed and generally very efficient at correctly categorizing expressions. However, does their ability to do so depend on how cognitively loaded they are at the time? Using repeated-measures designs, we assessed the sensitivity of facial expression categorization to the availability of cognitive resources by measuring people's expression categorization performance during concurrent low and high cognitive load situations. In Experiment 1, participants categorized the 6 basic upright facial expressions in a 6-automated-facial-coding response paradigm while maintaining low or high loading information in working memory (N = 40; 60 observations per load condition). In Experiment 2, they did so for both upright and inverted faces (N = 46; 60 observations per load and inversion condition). In both experiments, expression categorization for upright faces was worse during high versus low load. Categorization rates actually improved with increased load for the inverted faces. The opposing effects of cognitive load on upright and inverted expressions are explained in terms of a load-related dispersion of the attentional window. Overall, the findings support that expression categorization is sensitive to the availability of cognitive resources and moreover suggest that, in this paradigm, it is the perceptual processing stage of expression categorization that is affected by cognitive load. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Automatic mimicry reactions as related to differences in emotional empathy.

    PubMed

    Sonnby-Borgström, Marianne

    2002-12-01

    The hypotheses of this investigation were derived by conceiving of automatic mimicking as a component of emotional empathy. Differences between subjects high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity when subjects were exposed to pictures of angry or happy faces, and the degree of correspondence between subjects' facial EMG reactions and their self-reported feelings. The comparisons were made at different stimulus exposure times in order to elicit reactions at different levels of information processing. The high-empathy subjects were found to have a higher degree of mimicking behavior than the low-empathy subjects, a difference that emerged at short exposure times (17-40 ms) representing automatic reactions. At these short exposure times, the low-empathy subjects already tended to show inverse zygomaticus muscle reactions, namely "smiling" when exposed to an angry face. The high-empathy group was characterized by a significantly higher correspondence between facial expressions and self-reported feelings. No differences were found between the high- and low-empathy subjects in their verbally reported feelings when presented with a happy or an angry face. Thus, the differences between the groups in emotional empathy appeared to be related to differences in automatic somatic reactions to facial stimuli rather than to differences in their conscious interpretation of the emotional situation.

  3. Abnormalities in early visual processes are linked to hypersociability and atypical evaluation of facial trustworthiness: An ERP study with Williams syndrome.

    PubMed

    Shore, Danielle M; Ng, Rowena; Bellugi, Ursula; Mills, Debra L

    2017-10-01

    Accurate assessment of trustworthiness is fundamental to successful and adaptive social behavior. Initially, people assess trustworthiness from facial appearance alone. These assessments then inform critical approach or avoid decisions. Individuals with Williams syndrome (WS) exhibit a heightened social drive, especially toward strangers. This study investigated the temporal dynamics of facial trustworthiness evaluation in neurotypic adults (TD) and individuals with WS. We examined whether differences in neural activity during trustworthiness evaluation may explain increased approach motivation in WS compared to TD individuals. Event-related potentials were recorded while participants appraised faces previously rated as trustworthy or untrustworthy. TD participants showed increased sensitivity to untrustworthy faces within the first 65-90 ms, indexed by the negative-going rise of the P1 onset (oP1). The amplitude of the oP1 difference to untrustworthy minus trustworthy faces was correlated with lower approachability scores. In contrast, participants with WS showed increased N170 amplitudes to trustworthy faces. The N170 difference to low- minus high-trust faces was correlated with low approachability in TD and high approachability in WS. The findings suggest that the hypersociability associated with WS may arise from abnormalities in the timing and organization of early visual brain activity during trustworthiness evaluation. More generally, the study provides support for the hypothesis that impairments in low-level perceptual processes can have a cascading effect on social cognition.

  4. Processing of prosodic changes in natural speech stimuli in school-age children.

    PubMed

    Lindström, R; Lepistö, T; Makkonen, T; Kujala, T

    2012-12-01

    Speech prosody conveys information about important aspects of communication: the meaning of the sentence and the emotional state or intention of the speaker. The present study addressed the processing of emotional prosodic changes in natural speech stimuli in school-age children (mean age 10 years) by recording the electroencephalogram, facial electromyography, and behavioral responses. The stimulus was a semantically neutral Finnish word uttered with four different emotional connotations: neutral, commanding, sad, and scornful. In the behavioral sound-discrimination task, reaction times were fastest for the commanding stimulus and longest for the scornful stimulus, and faster for the neutral than for the sad stimulus. EEG and EMG responses were measured during a non-attentive oddball paradigm. Prosodic changes elicited a negative-going, fronto-centrally distributed neural response peaking at about 500 ms from the onset of the stimulus, followed by a fronto-central positive deflection peaking at about 740 ms. The commanding stimulus also elicited a rapid negative deflection peaking at about 290 ms from stimulus onset. No reliable stimulus-type-specific rapid facial reactions were found. The results show that prosodic changes in natural speech stimuli activate pre-attentive neural change-detection mechanisms in school-age children. However, the results do not support the suggestion of automaticity of emotion-specific facial muscle responses to non-attended emotional speech stimuli in children. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. [Analysis of different health status based on characteristics of the facial spectrum photometric color].

    PubMed

    Xu, Jiatuo; Wu, Hongjin; Lu, Luming; Tu, Liping; Zhang, Zhifeng; Chen, Xiao

    2012-12-01

    This paper aimed to observe differences in the facial color of people with different health status using a spectral photometric color measuring technique, according to the theory of facial color diagnosis in the Internal Classic. We gathered facial color information on persons in a healthy group (n = 183), a sub-healthy group (n = 287) and a disease group (n = 370). The information comprised L, a, b and C values and the reflectance at wavelengths of 400-700 nm, measured at 8 points with a CM-2600D spectral photometric color measuring instrument. The results indicated that the overall complexion color values of the three groups differed significantly: persons in the disease group looked deep and dark in complexion, while people in the sub-healthy group looked pale. The L, a, b and C values showed significant differences of varying degree (P < 0.05) at 6 points among the groups, with the central position of the face showing the most significant differences. Comparing the facial color information at the same point across the three groups yielded each group's special diagnostic point. The spectral photometric color measuring technique thus has some diagnostic value in distinguishing disease status from other states of health, and provides a promising quantitative basis for the Chinese medical inspection of complexion.
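
    The L, a, b and C values above live in the CIELAB color space, where a standard way to quantify how far apart two complexions lie is the CIE76 color difference, i.e. the Euclidean distance between L*a*b* coordinates. A minimal sketch with hypothetical group means (not the study's data):

        import numpy as np

        def delta_e_cie76(lab1, lab2):
            """CIE76 color difference: Euclidean distance in L*a*b* space."""
            return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

        # Hypothetical group-mean facial colors (L*, a*, b*) at one point.
        healthy = (62.0, 13.5, 16.0)
        subhealthy = (64.5, 11.8, 15.2)   # paler: higher L*, lower chroma
        disease = (57.0, 14.9, 18.3)      # darker: lower L*
        print(delta_e_cie76(healthy, subhealthy))
        print(delta_e_cie76(healthy, disease))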

  6. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  7. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    PubMed

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Neural responses to facial expressions support the role of the amygdala in processing threat

    PubMed Central

    Sormaz, Mladen; Flack, Tessa; Asghar, Aziz U. R.; Fan, Siyan; Frey, Julia; Manssuer, Luis; Usten, Deniz; Young, Andrew W.; Andrews, Timothy J.

    2014-01-01

    The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results. PMID:24097376

  9. Extracranial Facial Nerve Schwannoma Treated by Hypo-fractionated CyberKnife Radiosurgery.

    PubMed

    Sasaki, Ayaka; Miyazaki, Shinichiro; Hori, Tomokatsu

    2016-09-21

    Facial nerve schwannoma is a rare intracranial tumor. Treatment for this benign tumor has been controversial. Here, we report a case of extracranial facial nerve schwannoma treated successfully by hypo-fractionated CyberKnife (Accuray, Sunnyvale, CA) radiosurgery and discuss the efficacy of this treatment. A 34-year-old female noticed a swelling in her right mastoid process. The lesion enlarged over a seven-month period, and she experienced facial spasm on the right side. She was diagnosed with a facial schwannoma via a magnetic resonance imaging (MRI) scan of the head and neck and was told to wait until the facial nerve palsy subsided. She was referred to our hospital for radiation therapy. We planned fractionated CyberKnife radiosurgery over three consecutive days. After CyberKnife radiosurgery, the mass in the right parotid gradually decreased in size, and the facial nerve palsy disappeared. At her eight-month follow-up, her facial spasm had completely disappeared. There has been no recurrence, and facial nerve function has remained normal. We thus demonstrated the efficacy of CyberKnife radiosurgery as an alternative treatment for facial nerve schwannomas that also preserves neurofunction.

  10. Harnessing the power of multimedia in offender-based law enforcement information systems

    NASA Astrophysics Data System (ADS)

    Zimmerman, Alan P.

    1997-02-01

    Criminal offenders are increasingly processed administratively by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called upon to solve criminal cases based upon limited evidence: evidence increasingly composed of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance camera facial images and voices. As multimedia systems receive greater use in law enforcement, the traditional approaches used to index text data are not appropriate for the images and signal data which comprise a multimedia database. Multimedia systems with integrated advanced pattern matching tools will provide law enforcement with the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.

  11. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.

    PubMed

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-07-24

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.

  12. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  13. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores, typically because hormonal changes make the skin oilier. The problem is that people have no real way to assess the sensitivity of their skin in terms of the fluid development on their faces that tends to promote acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions, as the two are characterized differently.
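
    One plausible reading of the fluid-mapping step is a brightness threshold over the fluorescence image restricted to the face region, since facial fluids fluoresce more strongly than surrounding skin. The sketch below illustrates that idea under stated assumptions (synthetic image, simple mean-plus-k-sigma threshold); it is not the authors' pipeline.

        import numpy as np

        def fluid_mask(fluorescence, face_mask, k=2.0):
            """Flag pixels whose fluorescence is unusually bright within the face.

            Assumes fluid regions fluoresce more strongly than surrounding
            skin: pixels more than k standard deviations above the
            within-face mean are marked as fluid.
            """
            vals = fluorescence[face_mask]
            thresh = vals.mean() + k * vals.std()
            return (fluorescence > thresh) & face_mask

        # Hypothetical 100x100 frame: dim skin plus one brighter "fluid" patch.
        rng = np.random.RandomState(0)
        img = rng.rand(100, 100) * 0.2
        img[40:50, 40:50] += 0.8
        face = np.ones_like(img, dtype=bool)
        print("fluid coverage: %.1f%%" % (100 * fluid_mask(img, face).mean()))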

  14. Gender differences in the motivational processing of facial beauty☆

    PubMed Central

    Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor

    2013-01-01

    Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial images, and (b) rating the attractiveness of these images. Men expended more effort (via the key-press task) to extend the viewing time of the beautiful female faces. Women displayed similarly increased effort for beautiful male and female images, but the magnitude of this effort was substantially lower than that of men for beautiful females. Heterosexual facial attractiveness ratings were comparable in both groups. These findings demonstrate heterosexual specificity of facial motivational targets for men, but not for women. Moreover, heightened drive for the pursuit of heterosexual beauty in the face of regular valuational assessments, displayed by men, suggests a gender-specific incentive sensitization phenomenon. PMID:24282336

  15. Once more: is beauty in the eye of the beholder? Relative contributions of private and shared taste to judgments of facial attractiveness.

    PubMed

    Hönekopp, Johannes

    2006-04-01

    Misconstruing the meaning of Cronbach's alpha, experts on facial attractiveness have conveyed the impression that facial-attractiveness judgment standards are largely shared. This claim is unsubstantiated, because information necessary for deciding whether judgments of facial attractiveness are more influenced by commonly shared or by privately held evaluation standards is lacking. Three experiments, using diverse face and rater samples to investigate the relative contributions of private and shared taste to judgments of facial attractiveness, are reported. These experiments show that for a variety of ancillary conditions, and contrary to the prevalent notion in the literature, private taste is about as powerful as shared taste. Important implications for scientific research strategy and laypeople's self-esteem are discussed.

  16. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    PubMed Central

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent accounts in social cognition suggest a dual-process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology, the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in the respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive interventions. PMID:23805122

  17. Facial paralysis for the plastic surgeon.

    PubMed

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory Rd; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or 'hands-on', aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper.

  18. Facial paralysis for the plastic surgeon

    PubMed Central

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory RD; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or ‘hands-on’, aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper. PMID:19554190

  19. Mood-congruent amygdala responses to subliminally presented facial expressions in major depression: associations with anhedonia

    PubMed Central

    Stuhrmann, Anja; Dohm, Katharina; Kugel, Harald; Zwanzger, Peter; Redlich, Ronny; Grotegerd, Dominik; Rauch, Astrid Veronika; Arolt, Volker; Heindel, Walter; Suslow, Thomas; Zwitserlood, Pienie; Dannlowski, Udo

    2013-01-01

    Background: Anhedonia has long been recognized as a key feature of major depressive disorders, but little is known about the association between hedonic symptoms and neurobiological processes in depressed patients. We investigated whether amygdala mood-congruent responses to emotional stimuli in depressed patients are correlated with anhedonic symptoms at automatic levels of processing. Methods: We measured amygdala responsiveness to subliminally presented sad and happy facial expressions in depressed patients and matched healthy controls using functional magnetic resonance imaging. Amygdala responsiveness was compared between patients and healthy controls within a 2 (group) × 2 (emotion) design. In addition, we correlated patients' amygdala responsiveness to sad and happy facial stimuli with self-report questionnaire measures of anhedonia. Results: We included 35 patients and 35 controls in our study. As in previous studies, we observed a strong emotion × group interaction in the bilateral amygdala: depressed patients showed greater amygdala responses to sad than happy faces, whereas healthy controls responded more strongly to happy than sad faces. The lack of automatic right amygdala responsiveness to happy faces in depressed patients was associated with higher physical anhedonia scores. Limitations: Almost all depressed patients were taking antidepressant medications. Conclusion: We replicated our previous finding of depressed patients showing automatic amygdala mood-congruent biases in terms of enhanced reactivity to negative emotional stimuli and reduced activity to positive emotional stimuli. The altered amygdala processing of positive stimuli in patients was associated with anhedonia scores. The results indicate that reduced amygdala responsiveness to positive stimuli may contribute to anhedonic symptoms due to reduced/inappropriate salience attribution to positive information at very early processing levels. PMID:23171695

  20. Iatrogenic facial palsy: the cost.

    PubMed

    Pulec, J L

    1996-11-01

    The cost of iatrogenic facial paralysis can be high. Ways to avoid facial nerve injury during surgery and, should it occur, ways to minimize the disability and cost are discussed. These include adequate preparation and training by the surgeon, the exercise of sound judgment, high moral standards on the part of the surgeon, adequate preoperative diagnosis and surgical instrumentation, and thorough preoperative oral and written informed consent. Should facial nerve injury occur, immediate consultation and reparative decompression, anastomosis or grafting should be performed to obtain the best ultimate result. The value of prompt, competent, sympathetic and continuing concern offered by the surgeon to the patient cannot be overemphasized.

  1. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  2. Gender Differences in the Motivational Processing of Facial Beauty

    ERIC Educational Resources Information Center

    Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor

    2008-01-01

    Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial…

  3. Effects of task demands on the early neural processing of fearful and happy facial expressions

    PubMed Central

    Itier, Roxane J.; Neath-Tavares, Karly N.

    2017-01-01

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced with a gaze-contingent presentation. Task demands modulated amplitudes from 200–350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150–350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. PMID:28315309

  4. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of the six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to the explicit analysis of the temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier, which detects the temporal segments on a frame-by-frame basis, with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail in database-dependent experiments on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database and the GEMEP-FERA dataset, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches on the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
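
    The pairing of a frame-wise classifier with a Markov model for temporal consistency can be illustrated with a Viterbi decode over the neutral-onset-apex-offset ordering of AU temporal segments. The sketch below uses hypothetical frame scores and a hand-built transition matrix; it shows the smoothing idea in general, not the paper's exact model.

        import numpy as np

        STATES = ["neutral", "onset", "apex", "offset"]

        def viterbi(log_emit, log_trans, log_init):
            """Most likely segment-label path given per-frame log-scores.

            log_emit : (n_frames, n_states) frame-wise log-probabilities
            log_trans: (n_states, n_states) log transition matrix
            """
            n, s = log_emit.shape
            dp = np.full((n, s), -np.inf)
            ptr = np.zeros((n, s), dtype=int)
            dp[0] = log_init + log_emit[0]
            for t in range(1, n):
                cand = dp[t - 1][:, None] + log_trans   # (from-state, to-state)
                ptr[t] = cand.argmax(axis=0)
                dp[t] = cand.max(axis=0) + log_emit[t]
            path = [int(dp[-1].argmax())]
            for t in range(n - 1, 0, -1):               # backtrack
                path.append(int(ptr[t][path[-1]]))
            return [STATES[i] for i in reversed(path)]

        # Only neutral->onset->apex->offset->neutral moves (plus self-loops).
        allowed = np.array([[1, 1, 0, 0],
                            [0, 1, 1, 0],
                            [0, 0, 1, 1],
                            [1, 0, 0, 1]], dtype=float)
        with np.errstate(divide="ignore"):
            log_trans = np.log(allowed / allowed.sum(1, keepdims=True))
            log_init = np.log(np.array([0.97, 0.01, 0.01, 0.01]))

        # Hypothetical noisy frame-wise scores for a 10-frame episode.
        rng = np.random.RandomState(1)
        truth = [0, 0, 1, 1, 2, 2, 2, 3, 3, 0]
        log_emit = np.log(np.eye(4)[truth] * 0.8 + 0.05 + rng.rand(10, 4) * 0.1)
        print(viterbi(log_emit, log_trans, log_init))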

  5. Facial soft tissue thickness in skeletal type I Japanese children.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Deguchi, Toshio; Umemura, Yasunobu; Yoshino, Mineo; Nakamura, Hiroshi; Miyazawa, Hiroo; Inoue, Katsuhiro

    2007-10-25

    Facial reconstruction techniques used in forensic anthropology require knowledge of the facial soft tissue thickness of each race if facial features are to be reconstructed correctly; if this knowledge is inaccurate, so too will be the reconstructed face. Knowledge of differences by age and sex is also required. Therefore, when unknown human skeletal remains are found, the forensic anthropologist investigates race, sex, age, and other variables of relevance. Cephalometric X-ray images of living persons can help to provide this information: they show an approximately 10% enlargement from true size and can demonstrate the relationship between soft and hard tissue. In the present study, facial soft tissue thickness in Japanese children was measured at 12 anthropological points using X-ray cephalometry in order to establish a database of facial soft tissue thickness. This study of both boys and girls, aged from 6 to 18 years, follows a previous study of Japanese female children only, and focuses on facial soft tissue thickness in a single skeletal type. Sex differences in tissue thickness were found from 12 years of age upwards. The study provides more detailed and accurate measurements than past reports of facial soft tissue thickness, and reveals the distinctiveness of the Japanese child's facial profile.
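
    Since cephalograms enlarge the image by roughly 10%, on-film distances must be scaled back before entering a thickness database. A one-line illustration of that correction (treating the exact magnification factor as an assumption):

        # Illustrative correction for the ~10% cephalometric enlargement noted
        # above: divide the on-film distance by the magnification factor.
        def true_thickness(measured_mm, magnification=1.10):
            return measured_mm / magnification

        print(true_thickness(11.0))  # an 11.0 mm on-film distance -> 10.0 mm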

  6. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  7. Face-to-face: Perceived personal relevance amplifies face processing.

    PubMed

    Bublatzky, Florian; Pittig, Andre; Schupp, Harald T; Alpers, Georg W

    2017-05-01

    The human face conveys emotional and social information, but how these two aspects influence face perception is not well understood. To model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, the faces either faced the observer or were shown in profile view, directed towards or away from each other. In Experiment 1 (n = 64), face pairs were rated for perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that the evaluation of emotional faces varies strongly with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations. © The Author (2017). Published by Oxford University Press.

  8. Hemispheric differences in recognizing upper and lower facial displays of emotion.

    PubMed

    Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D

    2001-01-01

    To determine whether there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g., anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. Thirty strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: (1) without instructions and (2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, they robustly identified the emotion displayed on the upper face for left visual field presentations; for right visual field presentations, they continued to identify the emotion displayed on the lower face, though to a lesser degree. Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability to recognize deceitful behaviors than the left hemisphere. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.

  9. Increased subjective experience of non-target emotions in patients with frontotemporal dementia and Alzheimer’s disease

    PubMed Central

    Chen, Kuan-Hua; Lwi, Sandy J.; Hua, Alice Y.; Haase, Claudia M.; Miller, Bruce L.; Levenson, Robert W.

    2017-01-01

    Although laboratory procedures are designed to produce specific emotions, participants often experience mixed emotions (i.e., target and non-target emotions). We examined non-target emotions in patients with frontotemporal dementia (FTD), Alzheimer’s disease (AD), other neurodegenerative diseases, and healthy controls. Participants watched film clips designed to produce three target emotions. Subjective experience of non-target emotions was assessed and emotional facial expressions were coded. Compared to patients with other neurodegenerative diseases and healthy controls, FTD patients reported more positive and negative non-target emotions, whereas AD patients reported more positive non-target emotions. There were no group differences in facial expressions of non-target emotions. We interpret these findings as reflecting deficits in processing interoceptive and contextual information resulting from neurodegeneration in brain regions critical for creating subjective emotional experience. PMID:29457053

  10. The Facially Disfigured Child.

    ERIC Educational Resources Information Center

    Moncada, Georgia A.

    1987-01-01

    The article reviews diagnosis and treatments for facially disfigured children including craniofacial reconstruction and microsurgical techniques. Noted are associated disease processes that affect the social and intellectual outcomes of the afflicted child. (Author/DB)

  11. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance; even after training, their accuracy improved only to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling. PMID:24656830
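
    A generic sketch of the pipeline shape described above, not the authors' actual system: summarize each automatically measured facial-movement time series with simple dynamics features, then train a classifier to separate genuine from faked expressions. All data below are synthetic placeholders.

        # Generic sketch (not the authors' system): summarize each facial-
        # movement time series with simple dynamics features, then train a
        # classifier to separate genuine from faked expressions.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def dynamics_features(au_series):
            """au_series: (n_frames, n_aus) -> per-AU mean, peak, mean speed."""
            speed = np.abs(np.diff(au_series, axis=0)).mean(axis=0)
            return np.concatenate([au_series.mean(axis=0),
                                   au_series.max(axis=0), speed])

        rng = np.random.default_rng(0)
        X = np.stack([dynamics_features(rng.random((90, 20))) for _ in range(60)])
        y = np.repeat([0, 1], 30)  # 0 = genuine, 1 = faked (placeholder labels)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        print(cross_val_score(clf, X, y, cv=5).mean())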

  12. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010
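
    The conjunction analysis in Experiment 1 can be illustrated with the common minimum-statistic approach, in which a voxel counts as shared only if it exceeds threshold in both contrasts; whether this matches the authors' exact pipeline is an assumption, and the arrays below are placeholders for voxelwise t-maps.

        # Minimum-statistic conjunction over two contrast maps (a standard
        # approach; matching the authors' exact pipeline is an assumption).
        import numpy as np

        def conjunction(tmap_a, tmap_b, threshold):
            """A voxel survives only if it exceeds threshold in BOTH maps."""
            return np.minimum(tmap_a, tmap_b) > threshold

        t_face = np.random.randn(64, 64, 40)   # 'aesthetic > gender', faces
        t_scene = np.random.randn(64, 64, 40)  # 'aesthetic > gender', scenes
        shared = conjunction(t_face, t_scene, threshold=3.1)
        print(int(shared.sum()), "voxels in the conjunction")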

  13. Psychometric challenges and proposed solutions when scoring facial emotion expression codes.

    PubMed

    Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver

    2014-12-01

    Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion of how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures for these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we recommend applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity as data treatments. When scoring facial emotion expression ability, the maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
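
    A minimal sketch of the recommended treatment under assumed data shapes: loess-smooth a software-coded expression time series, subtract the pre-stimulus baseline level, and take the maximum as the ability score. The smoothing fraction, baseline definition, and data are illustrative placeholders.

        # Sketch under assumed data shapes: loess-smooth a software-coded
        # expression time series, subtract the pre-stimulus baseline, and
        # take the maximum as the ability score.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def max_expression_score(codes, times, baseline_end=0.0, frac=0.2):
            smoothed = lowess(codes, times, frac=frac, return_sorted=False)
            baseline = smoothed[times < baseline_end].mean()
            return (smoothed - baseline).max()

        times = np.linspace(-1.0, 5.0, 300)  # seconds; stimulus onset at 0
        codes = np.random.rand(300)          # placeholder happiness codes
        print(max_expression_score(codes, times))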

  14. From the operating room to the courtroom: a comprehensive characterization of litigation related to facial plastic surgery procedures.

    PubMed

    Svider, Peter F; Keeley, Brieze R; Zumba, Osvaldo; Mauro, Andrew C; Setzen, Michael; Eloy, Jean Anderson

    2013-08-01

    Malpractice litigation has increased in recent decades, contributing to higher health-care costs. Characterization of complications leading to litigation is of special interest to practitioners of facial plastic surgery procedures because of the higher proportion of elective cases relative to other subspecialties. In this analysis, we comprehensively examine malpractice litigation in facial plastic surgery procedures and characterize factors important in determining legal responsibility, as this information may be of great interest and use to practitioners in several specialties. Retrospective analysis. The Westlaw legal database was examined for court records pertaining to facial plastic surgery procedures. The term "medical malpractice" was searched in combination with numerous procedures obtained from the American Academy of Facial Plastic and Reconstructive Surgery website. Of the 88 cases included, 62.5% were decided in the physician's favor, 9.1% were resolved with an out-of-court settlement, and 28.4% ended in a jury awarding damages for malpractice. The mean settlement was $577,437 and mean jury award was $352,341. The most litigated procedures were blepharoplasties and rhinoplasties. Alleged lack of informed consent was noted in 38.6% of cases; other common complaints were excessive scarring/disfigurement, functional considerations, and postoperative pain. This analysis characterized factors in determining legal responsibility in facial plastic surgery cases. Several factors were identified as potential targets for minimizing liability. Informed consent was the most reported entity in these malpractice suits. This finding emphasizes the importance of open communication between physicians and their patients regarding expectations as well as documentation of specific risks, benefits, and alternatives. © 2013 The American Laryngological, Rhinological, and Otological Society, Inc.

  15. A dynamic Shh expression pattern, regulated by SHH and BMP signaling, coordinates fusion of primordia in the amniote face

    PubMed Central

    Hu, Diane; Young, Nathan M.; Li, Xin; Xu, Yanhua; Hallgrímsson, Benedikt; Marcucio, Ralph S.

    2015-01-01

    The mechanisms of morphogenesis are not well understood, yet shaping structures during development is essential for establishing correct organismal form and function. Here, we examine mechanisms that help to shape the developing face during the crucial period of facial primordia fusion. This period of development is a time when the faces of amniote embryos exhibit the greatest degree of similarity, and it probably results from the necessity for fusion to occur to establish the primary palate. Our results show that hierarchical induction mechanisms, consisting of iterative signaling by Sonic hedgehog (SHH) followed by Bone morphogenetic proteins (BMPs), regulate a dynamic expression pattern of Shh in the ectoderm covering the frontonasal (FNP) and maxillary (MxP) processes. Furthermore, this Shh expression domain contributes to the morphogenetic processes that drive the directional growth of the globular process of the FNP toward the lateral nasal process and MxP, in part by regulating cell proliferation in the facial mesenchyme. The nature of the induction mechanism that we discovered suggests that the process of fusion of the facial primordia is intrinsically buffered against producing maladaptive morphologies, such as clefts of the primary palate, because there appears to be little opportunity for variation to occur during expansion of the Shh expression domain in the ectoderm of the facial primordia. Ultimately, these results might explain why this period of development constitutes a phylotypic stage of facial development among amniotes. PMID:25605783

  16. Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing

    PubMed Central

    Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2012-01-01

    We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to the other; at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions, which differed from each other in the display order of the multiple static images or the interstimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display led participants to process the target faces in a part-based manner, so that their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, face processing. PMID:22342561
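
    The composite effect is conventionally indexed as the performance difference between misaligned and aligned composites; a smaller difference indicates more part-based processing. A toy illustration with made-up accuracies (the index follows the standard definition, not the paper's reported numbers):

        # Toy computation of the composite face effect (CFE) as the accuracy
        # advantage for misaligned over aligned composites, per condition.
        # Accuracies are made up; the index follows the standard definition.
        accuracy = {
            "dynamic": {"aligned": 0.78, "misaligned": 0.82},
            "static": {"aligned": 0.65, "misaligned": 0.81},
        }
        for condition, acc in accuracy.items():
            cfe = acc["misaligned"] - acc["aligned"]
            print(f"{condition}: CFE = {cfe:.2f}")
        # A smaller CFE in the dynamic condition reflects part-based processing.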

  17. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    PubMed

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects' Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability.

  18. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidths in the discrimination of anger and fear facial expressions. The general paradigm was classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1, subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2, subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3, subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing middle-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and to low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
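
    A minimal sketch of band-pass filtering an image by spatial frequency, with the band expressed in cycles/image (standing in for cycles/face; the study's exact filter design is an assumption):

        # Band-pass filtering an image by spatial frequency, with the band in
        # cycles/image (standing in for cycles/face; the study's exact filter
        # design is an assumption).
        import numpy as np

        def bandpass(image, low_cpf, high_cpf):
            f = np.fft.fftshift(np.fft.fft2(image))
            h, w = image.shape
            yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
            radius = np.hypot(yy, xx)         # radial frequency, cycles/image
            mask = (radius >= low_cpf) & (radius <= high_cpf)
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

        face = np.random.rand(256, 256)       # placeholder face image
        mid_band = bandpass(face, 12, 28)     # keep the 12-28 cycles band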

  19. Single trial classification for the categories of perceived emotional facial expressions: an event-related fMRI study

    NASA Astrophysics Data System (ADS)

    Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing

    2016-03-01

    Recently, several studies have successfully applied multivariate pattern analysis methods to predict categories of emotions. These studies have mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve the perception of emotional information from the expressions of other people, and rapidly recognizing the emotional facial expressions of others is an important basic human skill. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, with each facial expression stimulus lasting 2 s. All participants performed 5 fMRI runs. A multivariate pattern analysis method, the support vector machine (SVM), was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as the input of the classifier; the most stable AAL areas were then selected according to prediction accuracies and comprised the final feature sets. Results showed that for the 6 pair-wise classification conditions, accuracy, sensitivity and specificity were all above chance, with happy vs. neutral and angry vs. disgust achieving the lowest results. These results suggest that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust may be represented more similarly in the brain.
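
    A sketch of the per-region feature-selection idea: cross-validate a linear SVM on the voxels inside each anatomical mask and keep the best-performing regions. Data, mask names, and shapes below are placeholders, not the study's materials.

        # Sketch of the per-region selection idea: cross-validate a linear SVM
        # on the voxels inside each anatomical mask, then keep the regions
        # with the best accuracies. All data and mask names are placeholders.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n_trials, n_voxels = 120, 5000
        X = rng.standard_normal((n_trials, n_voxels))  # trial-wise patterns
        y = np.tile([0, 1], n_trials // 2)             # e.g. happy vs. neutral
        masks = {f"AAL_region_{i}": rng.random(n_voxels) < 0.02
                 for i in range(10)}

        scores = {name: cross_val_score(SVC(kernel="linear"), X[:, m], y,
                                        cv=5).mean()
                  for name, m in masks.items()}
        stable = sorted(scores, key=scores.get, reverse=True)[:3]
        print("most discriminative regions:", stable)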
