Sex differences in face gender recognition: an event-related potential study.
Sun, Yueting; Gao, Xiaochao; Han, Shihui
2010-04-23
Multiple levels of neurocognitive processing are involved in human face processing. The present study examined whether early face processing, such as structural encoding, is modulated by task demands that direct attention to perceptual or social features of faces, and whether any such effect differs between men and women. Event-related brain potentials were recorded from male and female adults while they identified a low-level perceptual feature of faces (i.e., face orientation) and a high-level social feature of faces (i.e., gender). We found that task demands requiring the processing of face orientation or face gender modulated both the early occipital/temporal negativity (N170) and the late central/parietal positivity (P3). The N170 amplitude was smaller in the gender identification task than in the orientation identification task, whereas the P3 amplitude was larger in the gender identification task than in the orientation identification task. In addition, these effects were much stronger in women than in men. Our findings suggest that attention to social information in faces, such as gender, modulates both the early encoding of facial structure and the late evaluative processing of faces to a greater degree in women than in men.
Hjordt, Liv V; Stenbæk, Dea S; Madsen, Kathrine Skak; Mc Mahon, Brenda; Jensen, Christian G; Vestergaard, Martin; Hageman, Ida; Meder, David; Hasselbalch, Steen G; Knudsen, Gitte M
2017-04-01
Depressed individuals often exhibit impaired inhibition of negative input and impaired identification of positive stimuli, but it is unclear whether this is a state or a trait feature. We here exploited a naturalistic model, namely individuals with seasonal affective disorder (SAD), to study this feature longitudinally. The goal of this study was to examine seasonal changes in inhibitory control and identification of emotional faces in individuals with SAD. Twenty-nine individuals diagnosed with winter-SAD and 30 demographically matched controls with no seasonality symptoms completed an emotional Go/NoGo task, which required inhibiting prepotent responses to emotional facial expressions, and an emotional face identification task, each twice: once in winter and once in summer. In winter, individuals with SAD showed an impaired ability to inhibit responses to angry (p = .0006) and sad faces (p = .011), and decreased identification of happy faces (p = .032), compared with controls. In summer, individuals with SAD and controls performed similarly on these tasks (ps > .24). We provide novel evidence that inhibition of angry and sad faces and identification of happy faces are impaired in SAD in the symptomatic phase, but not in the remitted phase. These affective biases in cognitive processing constitute state-dependent features of SAD. Our data show that reinstatement of normal affective cognition should be possible and would constitute a major goal of psychiatric treatment to improve the quality of life of these patients.
Alenezi, Hamood M.; Fysh, Matthew C.; Johnston, Robert A.
2015-01-01
In face matching, observers have to decide whether two photographs depict the same person or different people. Not only is this task remarkably difficult, but accuracy also declines further during prolonged testing. The current study investigated whether this decline in long tasks can be eliminated with regular rest breaks (Experiment 1) or room-switching (Experiment 2). Both experiments replicated the accuracy decline for long face-matching tasks and showed that it could not be eliminated by rest or room-switching. These findings suggest that person identification in applied settings, such as passport control, might be particularly error-prone due to the long and repetitive nature of the task. The experiments also show that it is difficult to counteract these problems. PMID:26312179
Age differences in accuracy and choosing in eyewitness identification and face recognition.
Searcy, J H; Bartlett, J C; Memon, A
1999-05-01
Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in perceptual integrative processes underlying the experience of familiarity.
Xu, Xinxing; Li, Wen; Xu, Dong
2015-12-01
In this paper, we propose a new approach to improving face verification and person re-identification in RGB images by leveraging a set of RGB-D data, in which additional depth images, captured with depth cameras such as Kinect, are available in the training data. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat them as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike traditional face verification and person re-identification approaches that use only visual features, we further employ the extra depth features in the training data to improve the learning of the distance metric during training. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation, called ITML with privileged information (ITML+), for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging face data sets EUROCOM and CurtinFaces for face verification, as well as on the BIWI RGBD-ID data set for person re-identification, demonstrate the effectiveness of our proposed approach.
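The core idea of learning with privileged information can be conveyed with a small toy example. The sketch below is not the authors' ITML+ algorithm (which performs Bregman/cyclic projections on a full Mahalanobis matrix with slack variables): it learns only a diagonal metric on synthetic "visual" feature differences by projected gradient descent, and uses made-up "depth" distances purely to derive per-pair confidence weights. All numbers and the weighting scheme are assumptions for illustration.

```python
import numpy as np

# Toy sketch (NOT the paper's ITML+): learn a diagonal Mahalanobis metric
# on "visual" feature differences, using "depth" distances as privileged
# information that only weights the training pairs. Synthetic data:
# dimension 0 carries identity information, dimension 1 is mostly noise.
sim_diffs = np.array([[0.1, 2.0], [0.2, 1.5]])   # same-identity pairs
dis_diffs = np.array([[1.5, 0.1], [2.0, 0.3]])   # different-identity pairs

# Privileged (depth) distances for the same pairs, available only in training.
sim_depth = np.array([0.2, 0.3])
dis_depth = np.array([1.8, 2.2])

# Confidence weights: trust similar pairs with a small depth distance and
# dissimilar pairs with a large one (a hypothetical weighting scheme).
c_sim = 1.0 / (1.0 + sim_depth)
c_dis = dis_depth / (1.0 + dis_depth)

w = np.ones(2)            # diagonal metric: d(x, y) = sum(w * diff**2)
margin, lr = 4.0, 0.05

for _ in range(200):
    grad = np.zeros(2)
    for d2, c in zip(sim_diffs ** 2, c_sim):
        grad += c * d2                      # pull similar pairs together
    for d2, c in zip(dis_diffs ** 2, c_dis):
        if (w * d2).sum() < margin:         # hinge: push apart if too close
            grad -= c * d2
    w = np.clip(w - lr * grad, 0.0, None)   # projection keeps the metric PSD

def dist(diff):
    return float((w * diff ** 2).sum())
```

Note that the depth features never appear at test time; they only rescale each pair's contribution to the training loss, which is the essence of the privileged-information setting the abstract describes.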
Burk, Joshua A.; Fleckenstein, Katarina; Kozikowski, C. Teal
2018-01-01
The current work examined the unique contribution that autistic traits and social anxiety have on tasks examining attention and emotion processing. In Study 1, 119 typically-developing college students completed a flanker task assessing the control of attention to target faces and away from distracting faces during emotion identification. In Study 2, 208 typically-developing college students performed a visual search task which required identification of whether a series of 8 or 16 emotional faces depicted the same or different emotions. Participants with more self-reported autistic traits performed more slowly on the flanker task in Study 1 than those with fewer autistic traits when stimuli depicted complex emotions. In Study 2, participants higher in social anxiety performed less accurately on trials showing all complex faces; participants with autistic traits showed no differences. These studies suggest that traits related to autism and to social anxiety differentially impact social cognitive processing. PMID:29596523
Perceptual impairment in face identification with poor sleep
Beattie, Louise; Walsh, Darragh; McLaren, Jessica; Biello, Stephany M.
2016-01-01
Previous studies have shown impaired memory for faces following restricted sleep. However, it is not known whether lack of sleep impairs performance on face identification tasks that do not rely on recognition memory, despite these tasks being more prevalent in security and forensic professions—for example, in photo-ID checks at national borders. Here we tested whether poor sleep affects accuracy on a standard test of face-matching ability that does not place demands on memory: the Glasgow Face-Matching Task (GFMT). In Experiment 1, participants who reported sleep disturbance consistent with insomnia disorder show impaired accuracy on the GFMT when compared with participants reporting normal sleep behaviour. In Experiment 2, we then used a sleep diary method to compare GFMT accuracy in a control group to participants reporting poor sleep on three consecutive nights—and again found lower accuracy scores in the short sleep group. In both experiments, reduced face-matching accuracy in those with poorer sleep was not associated with lower confidence in their decisions, carrying implications for occupational settings where identification errors made with high confidence can have serious outcomes. These results suggest that sleep-related impairments in face memory reflect difficulties in perceptual encoding of identity, and point towards metacognitive impairment in face matching following poor sleep. PMID:27853547
Nakamura, Koyo; Arai, Shihoko; Kawabata, Hideaki
2017-11-01
People are sensitive to facial attractiveness because it is an important biological and social signal. As such, our perceptual and attentional system seems biased toward attractive faces. We tested whether attractive faces capture attention and enhance memory access in an involuntary manner using a dual-task rapid serial visual presentation (dtRSVP) paradigm, wherein multiple faces were successively presented for 120 ms. In Experiment 1, participants (N = 26) were required to identify two female faces embedded in a stream of animal faces as distractors. The results revealed that identification of the second female target (T2) was better when it was attractive compared to neutral or unattractive. In Experiment 2, we investigated whether perceived attractiveness affects T2 identification (N = 27). To this end, we performed another dtRSVP task involving participants in a romantic partnership with the opposite sex, wherein T2 was their romantic partner's face. The results demonstrated that a romantic partner's face was correctly identified more often than was the face of a friend or unknown person. Furthermore, the greater the intensity of passionate love participants felt for their partner (as measured by the Passionate Love Scale), the more often they correctly identified their partner's face. Our experiments indicate that attractive and romantic partners' faces facilitate the identification of the faces in an involuntary manner.
Weight and See: Loading Working Memory Improves Incidental Identification of Irrelevant Faces
Carmel, David; Fairnie, Jake; Lavie, Nilli
2012-01-01
Are task-irrelevant stimuli processed to a level enabling individual identification? This question is central both for perceptual processing models and for applied settings (e.g., eye-witness testimony). Lavie’s load theory proposes that working memory actively maintains attentional prioritization of relevant over irrelevant information. Loading working memory thus impairs attentional prioritization, leading to increased processing of task-irrelevant stimuli. Previous research has shown that increased working memory load leads to greater interference effects from response-competing distractors. Here we test the novel prediction that increased processing of irrelevant stimuli under high working memory load should lead to a greater likelihood of incidental identification of entirely irrelevant stimuli. To test this, we asked participants to perform a word-categorization task while ignoring task-irrelevant images. The categorization task was performed during the retention interval of a working memory task with either low or high load (defined by memory set size). Following the final experimental trial, a surprise question assessed incidental identification of the irrelevant image. Loading working memory was found to improve identification of task-irrelevant faces, but not of building stimuli (shown in a separate experiment to be less distracting). These findings suggest that working memory plays a critical role in determining whether distracting stimuli will be subsequently identified. PMID:22912623
Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.
2008-01-01
Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or if a building had 1 or 2 stories. Three ERPs were examined: (1) P100 to examine basic visual processing, (2) N170 to examine facial feature encoding, and (3) N250 to examine affect decoding. Behavioral performance on each task was also measured. Results showed that schizophrenia patients’ P100 was comparable to the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest for the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance in any of the 3 identification tasks. The pattern of intact P100 and N170 suggest that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704
Convolutional neural networks and face recognition task
NASA Astrophysics Data System (ADS)
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained highly important over the last several years. One of the most challenging problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to this task, but still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply a Canny edge detector. At the next stage, we use convolutional neural networks (CNNs) to solve the face recognition and person identification task.
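The crop-then-edges-then-CNN pipeline can be sketched minimally. In the sketch below, which is only an illustration of the stated pipeline and not the authors' implementation, a Sobel gradient magnitude stands in for the multi-stage Canny detector, and a single random-filter convolution layer with ReLU and global max-pooling stands in for the trained CNN; the input is a synthetic "face crop".

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def edge_map(face):
    """Sobel gradient magnitude as a stand-in for the Canny step."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = conv2d(face, sx)
    gy = conv2d(face, sx.T)
    return np.hypot(gx, gy)

def cnn_features(edges, n_filters=4, ksize=3, seed=0):
    """One conv layer (ReLU) + global max-pool -> small feature vector."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_filters):
        k = rng.standard_normal((ksize, ksize))
        fmap = np.maximum(conv2d(edges, k), 0.0)   # ReLU
        feats.append(fmap.max())                   # global max-pool
    return np.array(feats)

# A synthetic 16x16 "face crop" with a vertical intensity step:
face = np.zeros((16, 16))
face[:, 8:] = 1.0
f = cnn_features(edge_map(face))
```

In a real system the final feature vector would feed a classifier over enrolled identities; here it merely shows how the edge map, rather than raw pixels, becomes the CNN's input.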
Emotion-attention interactions in recognition memory for distractor faces.
Srinivasan, Narayanan; Gupta, Rashmi
2010-04-01
Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention.
Hills, Peter J; Hill, Dominic M
2017-07-12
Sad individuals perform more accurately at face identity recognition (Hills, Werno, & Lewis, 2011), possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals do (Wu, Pu, Allen, & Pauli, 2012). Fixating on features other than the eyes leads to a reduced own-ethnicity bias (Hills & Lewis, 2006). This background suggests that sad individuals would not view the eyes as much as happy individuals, and that this would result in improved expression recognition and a reduced own-ethnicity bias. This prediction was tested using an expression identification task with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias relative to happy-induced participants, owing to their scanning more facial features. We conclude that mood affects eye movements and face encoding by inducing a wider sampling strategy and deeper encoding of the facial features diagnostic for expression identification.
García-Rodríguez, Beatriz; Guillén, Carmen Casares; Barba, Rosa Jurado; Rubio Valladolid, Gabriel; Arjona, José Antonio Molina; Ellgring, Heiner
2012-02-15
There is evidence that visuo-spatial capacity can become overloaded when a secondary visual task is processed concurrently (dual task, DT), as occurs in daily life. Hence, we investigated the influence of visuo-spatial interference on the identification of emotional facial expressions (EFEs) in the early stages of Parkinson's disease (PD). We compared the identification of 24 emotional faces illustrating six basic emotions in unmedicated, recently diagnosed PD patients (n = 16) and healthy adults (n = 20), under two different conditions: a) simple EFE identification, and b) identification with a concurrent visuo-spatial task (Corsi blocks). EFE identification by PD patients was significantly worse than that of healthy adults when combined with another visual stimulus.
Finding Needles in Haystacks: Identity Mismatch Frequency and Facial Identity Verification
ERIC Educational Resources Information Center
Bindemann, Markus; Avetisyan, Meri; Blackwell, Kristy-Ann
2010-01-01
Accurate person identification is central to all security, police, and judicial systems. A commonplace method to achieve this is to compare a photo-ID and the face of its purported owner. The critical aspect of this task is to spot cases in which these two instances of a face do not match. Studies of person identification show that these instances…
Young, Kymberly D; Misaki, Masaya; Harmer, Catherine J; Victor, Teresa; Zotev, Vadim; Phillips, Raquel; Siegle, Greg J; Drevets, Wayne C; Bodurka, Jerzy
2017-10-15
In participants with major depressive disorder who are trained to upregulate their amygdalar hemodynamic responses during positive autobiographical memory recall with real-time functional magnetic resonance imaging neurofeedback (rtfMRI-nf) training, depressive symptoms diminish. This study tested whether amygdalar rtfMRI-nf also changes emotional processing of positive and negative stimuli in a variety of behavioral and imaging tasks. Patients with major depressive disorder completed two rtfMRI-nf sessions (18 received amygdalar rtfMRI-nf, 16 received control parietal rtfMRI-nf). One week before and following rtfMRI-nf training, participants performed tasks measuring responses to emotionally valenced stimuli including a backward-masking task, which measures the amygdalar hemodynamic response to emotional faces presented for traditionally subliminal duration and followed by a mask, and the Emotional Test Battery in which reaction times and performance accuracy are measured during tasks involving emotional faces and words. During the backward-masking task, amygdalar responses increased while viewing masked happy faces but decreased to masked sad faces in the experimental versus control group following rtfMRI-nf. During the Emotional Test Battery, reaction times decreased to identification of positive faces and during self-identification with positive words and vigilance scores increased to positive faces and decreased to negative faces during the faces dot-probe task in the experimental versus control group following rtfMRI-nf. rtfMRI-nf training to increase the amygdalar hemodynamic response to positive memories was associated with changes in amygdalar responses to happy and sad faces and improved processing of positive stimuli during performance of the Emotional Test Battery. These results may suggest that amygdalar rtfMRI-nf training alters responses to emotional stimuli in a manner similar to antidepressant pharmacotherapy. 
Age-Related Changes in the Processing of Emotional Faces in a Dual-Task Paradigm.
Casares-Guillén, Carmen; García-Rodríguez, Beatriz; Delgado, Marisa; Ellgring, Heiner
2016-01-01
Background/Study Context: Age-related changes appear to affect the ability to identify emotional facial expressions in dual-task conditions (i.e., while simultaneously performing a second visual task). The level of interference generated by the secondary task depends on the phase of emotional processing affected by the interference and the nature of the secondary task. The aim of the present study was to investigate the effect of these variables on age-related changes in the processing of emotional faces. The identification of emotional facial expressions (EFEs) was assessed in a dual-task paradigm using the following variables: (a) the phase during which interference was applied (encoding vs. retrieval phase); and (b) the nature of the interfering stimulus (visuospatial vs. verbal). The sample population consisted of 24 healthy aged adults (mean age = 75.38) and 40 younger adults (mean age = 26.90). The accuracy of EFE identification was calculated for all experimental conditions. Consistent with our hypothesis, the performance of the older group was poorer than that of the younger group in all experimental conditions. Dual-task performance was poorer when the interference occurred during the encoding phase of emotional face processing and when both tasks were of the same nature (i.e., when the experimental condition was more demanding in terms of attention). These results provide empirical evidence of age-related deficits in the identification of emotional facial expressions, which may be partially explained by the impairment of cognitive resources specific to this task. These findings may account for the difficulties experienced by the elderly during social interactions that require the concomitant processing of emotional and environmental information.
Neumann, Markus F; Schweinberger, Stefan R
2008-11-06
It is a matter of considerable debate whether attention to initial stimulus presentations is required for repetition-related neural modulations to occur. Recently, it has been assumed that faces are particularly hard to ignore, and can capture attention in a reflexive manner. In line with this idea, electrophysiological evidence for long-term repetition effects of unattended famous faces has been reported. The present study investigated influences of attention to prime faces on short-term repetition effects in event-related potentials (ERPs). We manipulated attention to short (200 ms) prime presentations (S1) of task-irrelevant famous faces according to Lavie's Perceptual Load Theory. Participants attended to letter strings superimposed on face images, and identified target letters "X" vs. "N" embedded in strings of either 6 different (high load) or 6 identical (low load) letters. Letter identification was followed by probe presentations (S2), which were either repetitions of S1 faces, new famous faces, or infrequent butterflies, to which participants responded. Our ERP data revealed repetition effects in terms of an N250r at occipito-temporal regions, suggesting priming of face identification processes, and in terms of an N400 at the vertex, suggesting semantic priming. Crucially, the magnitude of these effects was unaffected by perceptual load at S1 presentation. This indicates that task-irrelevant face processing is remarkably preserved even in a demanding letter detection task, supporting recent notions of face-specific attentional resources.
Ozbek, Müge; Bindemann, Markus
2011-10-01
The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000 ms, or for an unlimited duration. Peak accuracy was reached within 2000 ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2 s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed.
Ishitobi, Makoto; Kosaka, Hirotaka; Omori, Masao; Matsumura, Yukiko; Munesue, Toshio; Mizukami, Kimiko; Shimoyama, Tomohiro; Murata, Tetsuhito; Sadato, Norihiro; Okazawa, Hidehiko; Wada, Yuji
2011-01-01
Much functional neuroimaging evidence indicates that autistic spectrum disorders (ASD) demonstrate marked brain abnormalities in face processing. Most of these findings were obtained from studies using tasks related to whole faces. However, individuals with ASD tend to rely more on individual parts of the face for identification than on the…
Feature saliency in judging the sex and familiarity of faces.
Roberts, T; Bruce, V
1988-01-01
Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first, recognition, task only masking of the eyes had a significant effect on response times. In the second, sex-judgement, task masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.
Face emotion recognition is related to individual differences in psychosis-proneness.
Germine, L T; Hooker, C I
2011-05-01
Deficits in face emotion recognition (FER) in schizophrenia are well documented, and have been proposed as a potential intermediate phenotype for schizophrenia liability. However, research on the relationship between psychosis vulnerability and FER has mixed findings and methodological limitations. Moreover, no study has yet characterized the relationship between FER ability and level of psychosis-proneness. If FER ability varies continuously with psychosis-proneness, this suggests a relationship between FER and polygenic risk factors. We tested two large internet samples to see whether psychometric psychosis-proneness, as measured by the Schizotypal Personality Questionnaire-Brief (SPQ-B), is related to differences in face emotion identification and discrimination or other face processing abilities. Experiment 1 (n=2332) showed that psychosis-proneness predicts face emotion identification ability but not face gender identification ability. Experiment 2 (n=1514) demonstrated that psychosis-proneness also predicts performance on face emotion but not face identity discrimination. The tasks in Experiment 2 used identical stimuli and task parameters, differing only in emotion/identity judgment. Notably, the relationships demonstrated in Experiments 1 and 2 persisted even when individuals with the highest psychosis-proneness levels (the putative high-risk group) were excluded from analysis. Our data suggest that FER ability is related to individual differences in psychosis-like characteristics in the normal population, and that these differences cannot be accounted for by differences in face processing and/or visual perception. Our results suggest that FER may provide a useful candidate intermediate phenotype.
Pose invariant face recognition: 3D model from single photo
Napoléon, Thibault; Alfalou, Ayman
2017-02-01
Face recognition is widely studied in the literature for its potential in surveillance and security. In this paper, we report a novel algorithm for the identification task. The technique is based on an optimized 3D model that allows faces to be reconstructed in different poses from a limited number of references (i.e., one image per class/person). In particular, we use an active shape model to detect a set of keypoints on the face, which are needed to deform our synthetic model with our optimized finite element method. To improve this deformation, we propose a regularization based on graph distances. To perform the identification, we use the VanderLugt correlator (VLC), well known to be effective for this task. In addition, we add a difference-of-Gaussians filtering step to highlight edges and a description step based on local binary patterns. The experiments are performed on the PHPID database, enhanced with our 3D reconstructed faces of each person at azimuths and elevations ranging from -30° to +30°. The results demonstrate the robustness of the new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
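The description step named above (difference-of-Gaussians filtering followed by local binary patterns) can be sketched generically. The sigmas, kernel radius, and 3x3 LBP sampling pattern below are assumptions for illustration, not the paper's exact settings, and the "face crop" is synthetic.

```python
import numpy as np

def dog(img, sigma1=0.8, sigma2=1.6, radius=2):
    """Difference of two separable Gaussian blurs (edge-emphasising filter)."""
    def blur(im, sigma):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        k /= k.sum()
        im = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, im)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, im)
    return blur(img, sigma1) - blur(img, sigma2)

def lbp(img):
    """Basic 3x3 LBP code for each interior pixel (8 neighbours, clockwise)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= center).astype(np.uint8) << bit).astype(np.uint8)
    return out

face = np.zeros((10, 10))
face[:, 5:] = 1.0                  # synthetic crop with one vertical edge
codes = lbp(dog(face))
hist = np.bincount(codes.ravel(), minlength=256)   # histogram descriptor
```

In LBP-based pipelines the per-pixel codes are usually pooled into a histogram, as in the last line, and the histograms are then compared or fed to the matching stage.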
Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon
2011-01-01
Previous research has shown that prism adaptation (PA) can ameliorate several symptoms of spatial neglect after right-hemisphere damage, but the mechanisms behind this remain unclear. Recently we reported that prisms may increase leftward awareness for neglect in a task using chimeric visual objects, despite apparently not affecting awareness in a task using chimeric emotional faces (Sarri et al., 2006). Here we explored potential reasons for this apparent discrepancy in outcome, by testing further whether the lack of a prism effect on the chimeric face task could be explained by: i) the specific category of stimuli used (faces as opposed to objects); ii) the affective nature of the stimuli; and/or iii) the particular task implemented, with the chimeric face task requiring forced-choice judgements of lateral ‘preference’ between pairs of identical, but left/right mirror-reversed, chimeric faces (as opposed to identification for the chimeric object task). We replicated our previous pattern of no impact of prisms on the emotional chimeric face task here in a new series of patients, while also finding no beneficial impact on another lateral ‘preference’ measure that used non-face, non-emotional stimuli, namely greyscale gradients. By contrast, we found the usual beneficial impact of PA on some conventional measures of neglect, and improvements for at least some patients in a different face task requiring explicit discrimination of the chimeric or non-chimeric nature of face stimuli. The new findings indicate that prism therapy does not alter spatial biases in neglect as revealed by ‘lateral preference tasks’ that have no right or wrong answer (requiring forced-choice judgements on left/right mirror-reversed stimuli), regardless of whether these employ face or non-face stimuli. 
But our data also show that prism therapy can beneficially modulate some aspects of visual awareness in spatial neglect not only for objects, but also for face stimuli, in some cases. PMID:20171612
Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study
Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.
2012-01-01
Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was primarily related to the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597
Holistic Processing in the Composite Task Depends on Face Size.
Ross, David A; Gauthier, Isabel
Holistic processing is a hallmark of face processing. There is evidence that holistic processing is strongest for faces at identification distance, 2 - 10 meters from the observer. However, this evidence is based on tasks that have been little used in the literature and that are indirect measures of holistic processing. We use the composite task, a well-validated and frequently used paradigm, to measure the effect of viewing distance on holistic processing. In line with previous work, we find a congruency x alignment effect that is stronger for faces that are close (2 m equivalent distance) than for faces that are farther away (24 m equivalent distance). In contrast, the alignment effect for same trials, used by several authors to measure holistic processing, produced results that are difficult to interpret. We conclude that our results converge with previous findings, providing more direct evidence for an effect of size on holistic processing.
ERPs reveal subliminal processing of fearful faces.
Kiss, Monika; Eimer, Martin
2008-03-01
To investigate whether facial expression is processed in the absence of conscious awareness, ERPs were recorded in a task in which participants had to identify the expression of masked fearful and neutral target faces. On supraliminal trials (200 ms target duration), in which identification performance was high, a sustained positivity to fearful versus neutral target faces started 140 ms after target face onset. On subliminal trials (8 ms target duration), identification performance was at chance level, but ERPs still showed systematic fear-specific effects. An early positivity to fearful target faces was present but smaller than on supraliminal trials. A subsequent enhanced N2 to fearful faces was only present for subliminal trials. In contrast, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials. Results demonstrate rapid emotional expression processing in the absence of awareness.
Thirioux, Bérangère; Wehrmann, Moritz; Langbour, Nicolas; Jaafari, Nematollah; Berthoz, Alain
2016-01-01
Looking at our face in a mirror is one of the strongest phenomenological experiences of the Self, in which we need to identify the face reflected in the mirror as belonging to us. Recent behavioral and neuroimaging studies reported that self-face identification relies not only upon visual-mnemonic representation of one’s own face but also upon continuous updating and integration of visuo-tactile signals. Therefore, bodily self-consciousness plays a major role in self-face identification, with respect to the interplay between unisensory and multisensory processing. However, while previous studies demonstrated that the integration of multisensory body-related signals contributes to the visual processing of one’s own face, there are so far no data on how self-face identification, inversely, contributes to bodily self-consciousness. In the present study, we tested whether self–other face identification impacts either the egocentered or heterocentered visuo-spatial mechanisms that are core processes of bodily self-consciousness and sustain self–other distinction. To this end, we developed a new paradigm, named “Double Mirror.” This paradigm, consisting of a semi-transparent double mirror and computer-controlled Light Emitting Diodes, elicits a self–other face-merging illusory effect in ecologically more valid conditions, i.e., when participants are physically facing each other and interacting. Self-face identification was manipulated by exposing pairs of participants to an Interpersonal Visual Stimulation in which the reflection of their faces merged in the mirror. Participants simultaneously performed visuo-spatial and mental own-body transformation tasks centered on their own face (egocentered) or the face of their partner (heterocentered) in the pre- and post-stimulation phase. We show that self–other face identification altered the egocentered visuo-spatial mechanisms. Heterocentered coding was preserved.
Our data suggest that changes in self-face identification induced a bottom-up conflict between the current visual representation and the stored mnemonic representation of one’s own face which, in turn, top-down impacted bodily self-consciousness. PMID:27610095
Does Face Inversion Change Spatial Frequency Tuning?
ERIC Educational Resources Information Center
Willenbockel, Verena; Fiset, Daniel; Chauvin, Alan; Blais, Caroline; Arguin, Martin; Tanaka, James W.; Bub, Daniel N.; Gosselin, Frederic
2010-01-01
The authors examined spatial frequency (SF) tuning of upright and inverted face identification using an SF variant of the Bubbles technique (F. Gosselin & P. G. Schyns, 2001). In Experiment 1, they validated the SF Bubbles technique in a plaid detection task. In Experiments 2a-c, the SFs used for identifying upright and inverted inner facial…
Learning optimal eye movements to unusual faces
Peterson, Matthew F.; Eckstein, Miguel P.
2014-01-01
Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712
Face-off: A new identification procedure for child eyewitnesses.
Price, Heather L; Fitzgerald, Ryan J
2016-09-01
In 2 experiments, we introduce a new "face-off" procedure for child eyewitness identifications. The new procedure, which is premised on reducing the stimulus set size, was compared with the showup and simultaneous procedures in Experiment 1 and with modified versions of the simultaneous and elimination procedures in Experiment 2. Several benefits of the face-off procedure were observed: it was significantly more diagnostic than the showup procedure; it led to significantly more correct rejections of target-absent lineups than the simultaneous procedures in both experiments, and it led to greater information gain than the modified elimination and simultaneous procedures. The face-off procedure led to consistently more conservative responding than the simultaneous procedures in both experiments. Given the commonly cited concern that children are too lenient in their decision criteria for identification tasks, the face-off procedure may offer a concrete technique to reduce children's high choosing rates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Edalati, Hanie; Walsh, Zach; Kosson, David S
2016-08-01
Numerous studies have identified differences in the identification of emotional displays between psychopaths and non-psychopaths; however, results have been equivocal regarding the nature of these differences. The present study investigated an alternative approach to examining the association between psychopathy and emotion processing by examining attentional bias to emotional faces; we used a modified dot-probe task to measure attentional bias toward emotional faces in comparison with neutral faces, among a sample of male jail inmates assessed using the Psychopathy Checklist-Revised (PCL-R). Results indicated a positive association between psychopathy and attention toward happy versus neutral faces, and that this association was attributable to Factor 1 of the psychopathy construct. © The Author(s) 2015.
Emotional Recognition in Autism Spectrum Conditions from Voices and Faces
ERIC Educational Resources Information Center
Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne
2013-01-01
The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…
Face matching impairment in developmental prosopagnosia.
White, David; Rivolta, Davide; Burton, A Mike; Al-Janabi, Shahd; Palermo, Romina
2017-02-01
Developmental prosopagnosia (DP) is commonly referred to as 'face blindness', a term that implies a perceptual basis to the condition. However, DP presents as a deficit in face recognition and is diagnosed using memory-based tasks. Here, we test face identification ability in six people with DP, who are severely impaired on face memory tasks, using tasks that do not rely on memory. First, we compared DP to control participants on a standardized test of unfamiliar face matching using facial images taken on the same day and under standardized studio conditions (Glasgow Face Matching Test; GFMT). Scores for DP participants did not differ from normative accuracy scores on the GFMT. Second, we tested face matching performance on a test created using images that were sourced from the Internet and so varied substantially due to changes in viewing conditions and in a person's appearance (Local Heroes Test; LHT). DP participants showed significantly poorer matching accuracy on the LHT than control participants, for both unfamiliar and familiar face matching. Interestingly, this deficit is specific to 'match' trials, suggesting that people with DP may have particular difficulty in matching images of the same person that contain natural day-to-day variations in appearance. We discuss these results in the broader context of individual differences in face matching ability.
Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.
Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M
2014-09-01
Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.
Applying face identification to detecting hijacking of airplane
NASA Astrophysics Data System (ADS)
Luo, Xuanwen; Cheng, Qiang
2004-09-01
The hijacking of the airplanes that were crashed into the World Trade Center was a disaster for civilization, and preventing such hijackings is critical to homeland security. Reporting a hijacking in time, limiting the terrorists' ability to operate the plane, and landing the plane at the nearest airport could be an efficient way to avert such a tragedy. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, face images of the pilots are entered into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps capturing the pilot's face during the flight and comparing it with the pre-entered pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is started or the plane is controlled from the ground, so the terrorists have no control over the plane, which is landed at the nearest appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.
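The verification loop this abstract describes (compare the live face against enrolled pilot templates; alert on mismatch) can be sketched as follows. The feature extraction itself is abstracted away: the toy 3-dimensional feature vectors, the cosine-similarity metric, and the 0.8 threshold are all illustrative assumptions, not part of the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_pilot(live_features, enrolled_templates, threshold=0.8):
    """Compare live face features against the enrolled pilots' templates.

    Returns (is_authorized, best_score). In the described system this
    check would run on every camera frame; a failed check would trigger
    the ground alert and hand control to the autopilot or ground station.
    """
    scores = [cosine_similarity(live_features, t) for t in enrolled_templates]
    best = max(scores)
    return best >= threshold, best

# Two enrolled pilots (dummy feature vectors) and one live capture.
enrolled = [np.array([0.9, 0.1, 0.4]), np.array([0.2, 0.8, 0.5])]
ok, score = check_pilot(np.array([0.88, 0.12, 0.41]), enrolled)
```

A real deployment would use a proper face embedding model and a threshold tuned on verification data; the sketch only shows the control flow.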
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
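The representational model Turk and Pentland are known for, "eigenfaces" (principal component analysis over face images, with recognition by nearest neighbour in the projected "face space"), can be sketched with synthetic data as below. The data, the number of components, and all parameter choices are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faces": flattened 8x8 images, 3 identities x 4 samples each.
means = rng.normal(size=(3, 64))
train = np.vstack([m + 0.05 * rng.normal(size=(4, 64)) for m in means])
labels = np.repeat(np.arange(3), 4)

# Eigenfaces: principal components of the mean-centred training set.
mean_face = train.mean(axis=0)
centred = train - mean_face
# SVD row vectors are the eigenvectors of the sample covariance matrix.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:2]                      # keep the top-2 components

def project(x):
    return eigenfaces @ (x - mean_face)  # coordinates in "face space"

def identify(probe):
    """Nearest neighbour to the training faces in face space."""
    coords = project(probe)
    dists = [np.linalg.norm(coords - project(t)) for t in train]
    return labels[int(np.argmin(dists))]

probe = means[1] + 0.05 * rng.normal(size=64)
```

The same projection distance can also serve as a face detector (a patch far from face space is unlikely to be a face), which is one of the uses discussed in this line of work.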
Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T
2016-01-01
While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.
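The generative model behind the Linear Ballistic Accumulator analysis described above can be simulated in a few lines. The parameter values (threshold b, start-point range A, non-decision time t0, drift noise s) are illustrative assumptions; the study itself estimated parameters hierarchically with Bayesian methods rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lba_trial(drifts, b=1.0, A=0.5, t0=0.25, s=0.3):
    """Simulate one Linear Ballistic Accumulator trial.

    Each accumulator starts at a uniform point in [0, A] and rises
    linearly at a rate drawn from N(drift, s); the first to reach the
    threshold b determines the choice and the response time.
    """
    starts = rng.uniform(0, A, size=len(drifts))
    rates = rng.normal(drifts, s)
    rates = np.where(rates > 0, rates, 1e-6)   # guard against stalled accumulators
    times = (b - starts) / rates
    winner = int(np.argmin(times))
    return winner, t0 + times[winner]

# Two accumulators, e.g. the two response options to a morphing face;
# a higher mean drift rate models more efficient perceptual processing.
choices, rts = zip(*(lba_trial([1.2, 0.6]) for _ in range(2000)))
```

With the drift rates above, the first option wins on most trials and produces faster responses, which is the sense in which drift rate indexes processing efficiency in this model.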
Initial eye movements during face identification are optimal and similar across cultures
Or, Charles C.-F.; Peterson, Matthew F.; Eckstein, Miguel P.
2015-01-01
Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. PMID:26382003
Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel
2012-01-01
This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name.
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory and a distributed representation of concepts. PMID:23040175
ERIC Educational Resources Information Center
Canto, Angela I.; Cox, Bradley E.; Osborn, Debra; Becker, Martin Swanbrow; Hayden, Seth
2017-01-01
Resident Assistants (RAs) are often faced with working and living among residents who are struggling, in distress, and, at times, in crisis. This article provides a primer on identifying residents in crisis as well as the tasks faced when working with students in transition and crisis. A discussion of the prevalence of distress reported by…
Calvo, Manuel G; Nummenmaa, Lauri
2009-12-01
Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.
The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces
Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.
2005-01-01
We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167
Face and object encoding under perceptual load: ERP evidence.
Neumann, Markus F; Mohamed, Tarik N; Schweinberger, Stefan R
2011-02-14
According to the perceptual load theory, processing of a task-irrelevant distractor is abolished when attentional resources are fully consumed by task-relevant material. As an exception, however, famous faces have been shown to elicit repetition modulations in event-related potentials - an N250r - despite high load at initial presentation, suggesting preserved face-encoding. Here, we recorded N250r repetition modulations by unfamiliar faces, hands, and houses, and tested face specificity of preserved encoding under high load. In an immediate (S1-S2) repetition priming paradigm, participants performed a letter identification task on S1 by indicating whether an "X" vs. "N" was among 6 different (high load condition) or 6 identical (low load condition) letters. Letter strings were superimposed on distractor faces, hands, or houses. Subsequent S2 probes were either identical repetitions of S1 distractors, non-repeated exemplars from the same category, or infrequent butterflies, to which participants responded. Independent of attentional load at S1, an occipito-temporal N250r was found for unfamiliar faces. In contrast, no repetition-related neural modulation emerged for houses or hands. This strongly suggests that a putative face-selective attention module supports encoding under high load, and that similar mechanisms are unavailable for other natural or artificial objects. Copyright © 2010 Elsevier Inc. All rights reserved.
Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.
Okada, Takashi; Sato, Wataru; Toichi, Motomi
2006-11-01
Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects, measuring reaction time (RT). A face identification task was also given to determine hemispheric dominance in face processing for each subject. RT differences between valid and invalid cues were larger when cues were presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.
The aging eyewitness: effects of age on face, delay, and source-memory ability.
Memon, Amina; Bartlett, James; Rose, Rachel; Gray, Colin
2003-11-01
As a way to examine the nature of age-related differences in lineup identification accuracy, young (16-33 years) and older (60-82 years) witnesses viewed two similar videotaped incidents, one involving a young perpetrator and the other involving an older perpetrator. The incidents were followed by two separate lineups, one for the younger perpetrator and one for the older perpetrator. When the test delay was short (35 min), the young and older witnesses performed similarly on the lineups, but when the tests were delayed by 1 week, the older witnesses were substantially less accurate. When the target was absent from the lineups, the older witnesses made more false alarm errors, particularly when the faces were young. When the target was present in the lineups, correct identifications by both young and older witnesses were positively correlated with a measure of source recollection derived from a separate face-recognition task. Older witnesses scored poorly on this measure, suggesting that source-recollection deficits are partially responsible for age-related differences in performance on the lineup task.
Multiple confidence estimates as indices of eyewitness memory.
Sauer, James D; Brewer, Neil; Weber, Nathan
2008-08-01
Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.
Li, Yuan Hang; Tottenham, Nim
2013-04-01
A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Optical correlators for recognition of human face thermal images
NASA Astrophysics Data System (ADS)
Bauer, Joanna; Podbielska, Halina; Suchwalko, Artur; Mazurkiewicz, Jacek
2005-09-01
In this paper, the application of optical correlators to the recognition of face thermograms is described. The thermograms were collected from 27 individuals. For each person, 10 pictures under different conditions were recorded, and a database of 270 images was prepared. Two biometric systems, based on a joint transform correlator and a 4f correlator, were built. Each system was designed to perform two tasks: verification and identification. The recognition systems were tested and evaluated according to the Face Recognition Vendor Tests (FRVT).
Sanchez, Alvaro; Romero, Nuria; Maurage, Pierre; De Raedt, Rudi
2017-12-01
Interpersonal difficulties are common in depression, but their underlying mechanisms are not yet fully understood. The role of depression in the identification of mixed emotional signals with a direct interpersonal value remains unclear. The present study aimed to clarify this question. A sample of 39 individuals reporting a broad range of depression levels completed an emotion identification task where they viewed faces expressing three emotional categories (100% disgusted and 100% happy faces, as well as their morphed 50% disgusted - 50% happy exemplars). Participants were asked to identify the corresponding depicted emotion as "clearly disgusted", "mixed", or "clearly happy". Higher depression levels were associated with lower identification of positive emotions in 50% disgusted - 50% happy faces. The study was conducted with an analogue sample reporting individual differences in subclinical depression levels. Further research must replicate these findings in a clinical sample and clarify whether differential emotional identification patterns emerge in depression for different mixed negative-positive emotions (sad-happy vs. disgusted-happy). Depression may account for a lower bias to perceive positive states when ambiguous states from others include subtle signals of social threat (i.e., disgust), leading to an under-perception of positive social signals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Impairment in face processing in autism spectrum disorder: a developmental perspective.
Greimel, Ellen; Schulte-Rüther, Martin; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin
2014-09-01
Findings on face identity and facial emotion recognition in autism spectrum disorder (ASD) are inconclusive. Moreover, little is known about the developmental trajectory of face processing skills in ASD. Taking a developmental perspective, the aim of this study was to extend previous findings on face processing skills in a sample of adolescents and adults with ASD. N = 38 adolescents and adults (13-49 years) with high-functioning ASD and n = 37 typically developing (TD) control subjects matched for age and IQ participated in the study. Moreover, n = 18 TD children between the ages of 8 and 12 were included to address the question whether face processing skills in ASD follow a delayed developmental pattern. Face processing skills were assessed using computerized tasks of face identity recognition (FR) and identification of facial emotions (IFE). ASD subjects showed impaired performance on several parameters of the FR and IFE task compared to TD control adolescents and adults. Whereas TD adolescents and adults outperformed TD children in both tasks, performance in ASD adolescents and adults was similar to the group of TD children. Within the groups of ASD and control adolescents and adults, no age-related changes in performance were found. Our findings corroborate and extend previous studies showing that ASD is characterised by broad impairments in the ability to process faces. These impairments seem to reflect a developmentally delayed pattern that remains stable throughout adolescence and adulthood.
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2016-01-01
The study examined children’s recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children (N = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills, which included five tasks (three with faces and two with bodies). Parents and teachers reported on children’s aggressive behaviors and social skills. Children’s emotion accuracy on two of the three facial tasks and one of the body tasks was related to teacher reports of social skills. Some of these relations were moderated by child gender. In particular, the relationships between emotion recognition accuracy and reports of children’s behavior were stronger for boys than girls. Identifying preschool-aged children’s strengths and weaknesses in identification of emotion from faces and body poses may be helpful in guiding interventions with children who have problems with social and behavioral functioning that may be due, in part, to emotional knowledge deficits. Further developmental implications of these findings are discussed. PMID:27057129
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
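The mismatch that motivates TBT — a single global verification threshold versus class-specific optimal thresholds — can be illustrated with a toy simulation. This is an illustrative sketch only, not the paper's TBT algorithm; the score distributions and the per-class error criterion are invented for demonstration:

```python
# Illustration of why one global verification threshold is inconsistent
# with class-specific optima: two identities with different within-class
# score spread have different best accept/reject cut-offs.
import numpy as np

rng = np.random.default_rng(1)

# Simulated distance scores (lower = more similar) for two identities.
genuine_a = rng.normal(0.2, 0.05, 500)   # tight class
impostor_a = rng.normal(0.6, 0.10, 500)
genuine_b = rng.normal(0.4, 0.10, 500)   # looser class
impostor_b = rng.normal(0.9, 0.10, 500)

def best_threshold(genuine, impostor):
    """Threshold minimising false rejects + false accepts for one class."""
    scores = np.sort(np.concatenate([genuine, impostor]))
    errs = [(genuine > t).mean() + (impostor <= t).mean() for t in scores]
    return scores[int(np.argmin(errs))]

ta = best_threshold(genuine_a, impostor_a)
tb = best_threshold(genuine_b, impostor_b)
print(ta, tb)  # the two class-specific optima differ noticeably
```

Because the optimal cut-offs differ across classes, any single pre-learned global threshold must compromise on at least one of them; TBT's contribution is to pursue this balance during dimensionality reduction rather than after it.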
Face Recognition by Metropolitan Police Super-Recognisers.
Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike
2016-01-01
Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.
Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert
2014-06-01
Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsify one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Carrier-Toutant, Frédérike; Guay, Samuel; Beaulieu, Christelle; Léveillé, Édith; Turcotte-Giroux, Alexandre; Papineau, Samaël D; Brisson, Benoit; D'Hondt, Fabien; De Beaumont, Louis
2018-05-06
Concussions affect the processing of emotional stimuli. This study aimed to investigate how sex interacts with concussion effects on early event-related brain potential (ERP) measures (P1, N1) of emotional facial expression (EFE) processing in asymptomatic, multi-concussion athletes during an EFE identification task. Forty control athletes (20 females and 20 males) and 43 multi-concussed athletes (22 females and 21 males), recruited more than 3 months after their last concussion, were tested. Participants completed the Beck Depression Inventory II, the Beck Anxiety Inventory, the Post-Concussion Symptom Scale, and an Emotional Facial Expression Identification Task. Pictures of male and female faces expressing neutral, angry, and happy emotions were randomly presented, and the emotion depicted had to be identified as fast as possible during EEG acquisition. Relative to controls, concussed athletes of both sexes exhibited a significant suppression of P1 amplitude recorded from the dominant right hemisphere while performing the emotional face expression identification task. The present study also highlighted a sex-specific suppression of the N1 component amplitude after concussion, which affected male athletes. These findings suggest that repeated concussions alter the typical pattern of right-hemisphere response dominance to EFE in early stages of EFE processing and that the neurophysiological mechanisms underlying the processing of emotional stimuli are distinctively affected across sexes. (JINS, 2018, 24, 1-11).
The processing of auditory and visual recognition of self-stimuli.
Hughes, Susan M; Nicholson, Shevon E
2010-12-01
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine whether there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.
Painful faces-induced attentional blink modulated by top–down and bottom–up mechanisms
Zheng, Chun; Wang, Jin-Yan; Luo, Fei
2015-01-01
Pain-related stimuli can capture attention in an automatic (bottom–up) or intentional (top–down) fashion. Previous studies have examined attentional capture by pain-related information using spatial attention paradigms that involve mainly a bottom–up mechanism. In the current study, we investigated the pain information-induced attentional blink (AB) using a rapid serial visual presentation (RSVP) task, and compared the effects of task-irrelevant and task-relevant pain distractors. Relationships between accuracy of target identification and individual traits (i.e., empathy and catastrophizing thinking about pain) were also examined. The results demonstrated that task-relevant painful faces had a significant pain information-induced AB effect, whereas task-irrelevant faces showed a near-significant trend of this effect, supporting the notion that pain-related stimuli can influence the temporal dynamics of attention. Furthermore, we found a significant negative correlation between response accuracy and pain catastrophizing score in task-relevant trials. These findings suggest that active scanning of environmental information related to pain produces greater deficits in cognition than does unintentional attention toward pain, which may represent the different ways in which healthy individuals and patients with chronic pain process pain-relevant information. These results may provide insight into the understanding of maladaptive attentional processing in patients with chronic pain. PMID:26082731
Methylphenidate alters selective attention by amplifying salience.
ter Huurne, Niels; Fallon, Sean James; van Schouwenburg, Martine; van der Schaaf, Marieke; Buitelaar, Jan; Jensen, Ole; Cools, Roshan
2015-12-01
Methylphenidate, the most common treatment of attention deficit hyperactivity disorder (ADHD), is increasingly used by healthy individuals as a "smart drug" to enhance cognitive abilities like attention. A key feature of (selective) attention is the ability to ignore irrelevant but salient information in the environment (distractors). Although this ability is crucial for cognitive performance, it is not yet known how methylphenidate use affects resistance to attentional capture by distractors. The present study aims to clarify how methylphenidate affects distractor suppression in healthy individuals. The effect of methylphenidate (20 mg) on distractor suppression was assessed in healthy subjects (N = 20) in a within-subject, double-blind, placebo-controlled crossover design. We used a visuospatial attention task with target faces flanked by strong (faces) or weak (scrambled faces) distractors. Methylphenidate increased accuracy on trials that required gender identification of target face stimuli (methylphenidate 88.9 ± 1.4 [mean ± SEM], placebo 86.0 ± 1.2 %; p = .003), suggesting increased processing of the faces. At the same time, however, methylphenidate increased reaction time when the target face was flanked by a face distractor relative to a scrambled face distractor (methylphenidate 34.9 ± 3.73, placebo 26.7 ± 2.84 ms; p = .027), suggesting enhanced attentional capture by distractors with task-relevant features. We conclude that methylphenidate amplifies salience of task-relevant information at the level of the stimulus category. This leads to enhanced processing of the target (faces) but also increased attentional capture by distractors drawn from the same category as the target.
The perception of positive and negative facial expressions by unilateral stroke patients.
Abbott, Jacenta D; Wijeratne, Tissa; Hughes, Andrew; Perre, Diana; Lindell, Annukka K
2014-04-01
There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n=17), left brain-damaged (LBD; n=17), and healthy control (HC; n=34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between RBD and LBD groups indicates that both hemispheres are involved in positive and negative expression perception. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study's ability to find hemispheric differences in the perception of facial emotion. Copyright © 2014 Elsevier Inc. All rights reserved.
Palermo, Romina; O’Connor, Kirsty B.; Davis, Joshua M.; Irons, Jessica; McKone, Elinor
2013-01-01
Although good tests are available for diagnosing clinical impairments in face expression processing, there is a lack of strong tests for assessing “individual differences” – that is, differences in ability between individuals within the typical, nonclinical, range. Here, we develop two new tests, one for expression perception (an odd-man-out matching task in which participants select which one of three faces displays a different expression) and one additionally requiring explicit identification of the emotion (a labelling task in which participants select one of six verbal labels). We demonstrate validity (careful check of individual items, large inversion effects, independence from nonverbal IQ, convergent validity with a previous labelling task), reliability (Cronbach’s alphas of .77 and .76, respectively), and wide individual differences across the typical population. We then demonstrate the usefulness of the tests by addressing theoretical questions regarding the structure of face processing, specifically the extent to which the following processes are common or distinct: (a) perceptual matching and explicit labelling of expression (modest correlation between matching and labelling supported partial independence); (b) judgement of expressions from faces and voices (results argued labelling tasks tap into a multi-modal system, while matching tasks tap distinct perceptual processes); and (c) expression and identity processing (results argued for a common first step of perceptual processing for expression and identity). PMID:23840821
Emotion Awareness Predicts Body Mass Index Percentile Trajectories in Youth.
Whalen, Diana J; Belden, Andy C; Barch, Deanna; Luby, Joan
2015-10-01
To examine the rate of change in body mass index (BMI) percentile across 3 years in relation to emotion identification ability and brain-based reactivity in emotional processing regions. A longitudinal sample of 202 youths completed 3 functional magnetic resonance imaging-based facial processing tasks and behavioral emotion differentiation tasks. We examined the rate of change in the youth's BMI percentile as a function of reactivity in emotional processing brain regions and behavioral emotion identification tasks using multilevel modeling. Lower correct identification of both happiness and sadness measured behaviorally predicted increases in BMI percentile across development, whereas higher correct identification of both happiness and sadness predicted decreases in BMI percentile, while controlling for children's pubertal status, sex, ethnicity, IQ score, exposure to antipsychotic medication, family income-to-needs ratio, and externalizing, internalizing, and depressive symptoms. Greater neural activation in emotional reactivity regions to sad faces also predicted increases in BMI percentile during development, also controlling for the aforementioned covariates. Our findings provide longitudinal developmental data demonstrating links between both emotion identification ability and greater neural reactivity in emotional processing regions with trajectories of BMI percentiles across childhood. Copyright © 2015 Elsevier Inc. All rights reserved.
Tan, Patricia Z; Silk, Jennifer S; Dahl, Ronald E; Kronhaus, Dina; Ladouceur, Cecile D
2018-01-01
This study sought to examine age-related differences in the influences of social (neutral, emotional faces) and non-social/non-emotional (shapes) distractor stimuli in children, adolescents, and adults. To assess the degree to which distractor, or task-irrelevant, stimuli of varying social and emotional salience interfere with cognitive performance, children ( N = 12; 8-12y), adolescents ( N = 17; 13-17y), and adults ( N = 17; 18-52y) completed the Emotional Identification and Dynamic Faces (EIDF) task. This task included three types of dynamically-changing distractors: (1) neutral-social (neutral face changing into another face); (2) emotional-social (face changing from 0% emotional to 100% emotional); and (3) non-social/non-emotional (shapes changing from small to large) to index the influence of task-irrelevant social and emotional information on cognition. Results yielded no age-related differences in accuracy but showed an age-related linear reduction in correct reaction times across distractor conditions. An age-related effect in interference was observed, such that children and adults showed slower response times on correct trials with socially-salient distractors; whereas adolescents exhibited faster responses on trials with distractors that included faces rather than shapes. A secondary study goal was to explore individual differences in cognitive interference. Results suggested that regardless of age, low trait anxiety and high effortful control were associated with interference to angry faces. Implications for developmental differences in affective processing, notably the importance of considering the contexts in which purportedly irrelevant social and emotional information might impair, vs. improve cognitive control, are discussed.
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA)-based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
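The multiscale-LDA ensemble with product-rule fusion described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic stand-in data; the scales, the average-pooling downscaler, and the data generation are assumptions for demonstration, not the paper's exact pipeline:

```python
# Sketch: one LDA classifier per image scale, posteriors fused by the
# product rule. Synthetic two-class "images" stand in for face data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def downscale(images, factor):
    """Crude average-pooling downscale of square images, shape (n, h, w)."""
    n, h, w = images.shape
    h2, w2 = h // factor, w // factor
    return images[:, :h2 * factor, :w2 * factor].reshape(
        n, h2, factor, w2, factor).mean(axis=(2, 4))

# Synthetic stand-in for 32x32 face images of two classes (class 1 shifted).
X = rng.normal(size=(200, 32, 32)) + np.repeat([0.0, 0.5], 100)[:, None, None]
y = np.repeat([0, 1], 100)

# Train one LDA per scale.
scales = [1, 2, 4]
models = [(s, LinearDiscriminantAnalysis().fit(
    downscale(X, s).reshape(len(X), -1), y)) for s in scales]

def predict(images):
    """Fuse per-scale class posteriors with the product rule."""
    probs = np.ones((len(images), 2))
    for s, model in models:
        flat = downscale(images, s).reshape(len(images), -1)
        probs *= model.predict_proba(flat)  # product rule
    return probs.argmax(axis=1)

print(predict(X[:5]))
```

The product rule multiplies the per-scale posterior probabilities before taking the argmax, so a confident rejection at any one scale can veto a weak acceptance at another; this is one common choice among classifier-combination strategies (sum, max, and majority voting being the usual alternatives).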
Perceptual expertise in forensic facial image comparison
White, David; Phillips, P. Jonathon; Hahn, Carina A.; Hill, Matthew; O'Toole, Alice J.
2015-01-01
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. PMID:26336174
Lopez-Duran, Nestor L.; Kuhlman, Kate R.; George, Charles; Kovacs, Maria
2012-01-01
In the present study, we examined perceptual sensitivity to facial expressions of sadness among children at familial risk for depression (N = 64) and low-risk peers (N = 40) between the ages of 7 and 13 (Mage = 9.51; SD = 2.27). Participants were presented with pictures of facial expressions that varied in emotional intensity from neutral to full-intensity sadness or anger (i.e., emotion recognition), or pictures of faces morphing from anger to sadness (emotion discrimination). After each picture was presented, children indicated whether the face showed a specific emotion (i.e., sadness, anger) or no emotion at all (neutral). In the emotion recognition task, boys (but not girls) at familial risk for depression identified sadness at significantly lower levels of emotional intensity than did their low-risk peers. The high- and low-risk groups did not differ with regard to identification of anger. In the emotion discrimination task, both groups displayed over-identification of sadness in ambiguous mixed faces, but high-risk youth were less likely to show this labeling bias than their peers. Our findings are consistent with the hypothesis that enhanced perceptual sensitivity to subtle traces of sadness in facial expressions may be a potential mechanism of risk among boys at familial risk for depression. This enhanced perceptual sensitivity does not appear to be due to biases in the labeling of ambiguous faces. PMID:23106941
Eifuku, Satoshi; De Souza, Wania C.; Nakata, Ryuzaburo; Ono, Taketoshi; Tamura, Ryoi
2011-01-01
To investigate the neural representations of faces in primates, particularly in relation to their personal familiarity or unfamiliarity, neuronal activities were chronically recorded from the ventral portion of the anterior inferior temporal cortex (AITv) of macaque monkeys during the performance of a facial identification task using either personally familiar or unfamiliar faces as stimuli. By calculating the correlation coefficients between neuronal responses to the faces for all possible pairs of faces given in the task and then using the coefficients as neuronal population-based similarity measures between the faces in pairs, we analyzed the similarity/dissimilarity relationship between the faces, which were potentially represented by the activities of a population of the face-responsive neurons recorded in the area AITv. The results showed that, for personally familiar faces, different identities were represented by different patterns of activities of the population of AITv neurons irrespective of the view (e.g., front, 90° left, etc.), while different views were not represented independently of their facial identities, which was consistent with our previous report. In the case of personally unfamiliar faces, the faces possessing different identities but presented in the same frontal view were represented as similar, which contrasts with the results for personally familiar faces. These results, taken together, outline the neuronal representations of personally familiar and unfamiliar faces in the AITv neuronal population. PMID:21526206
van Golde, Celine; Verstraten, Frans A. J.
2017-01-01
The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal, and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face. PMID:28203355
Face Recognition by Metropolitan Police Super-Recognisers
Robertson, David J.; Noyes, Eilidh; Dowsett, Andrew J.; Jenkins, Rob; Burton, A. Mike
2016-01-01
Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition. Copyright © 2018. Published by Elsevier Inc.
Social Experience Does Not Abolish Cultural Diversity in Eye Movements
Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto
2011-01-01
Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. To date, studies reporting such differences have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626
Categorical Perception of Emotional Facial Expressions in Preschoolers
ERIC Educational Resources Information Center
Cheal, Jenna L.; Rutherford, M. D.
2011-01-01
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…
Individual differences in face-looking behavior generalize from the lab to the world.
Peterson, Matthew F; Lin, Jing; Zaun, Ian; Kanwisher, Nancy
2016-05-01
Recent laboratory studies have found large, stable individual differences in the location people first fixate when identifying faces, ranging from the brows to the mouth. Importantly, this variation is strongly associated with differences in fixation-specific identification performance such that individuals' recognition ability is maximized when looking at their preferred location (Mehoudar, Arizpe, Baker, & Yovel, 2014; Peterson & Eckstein, 2013). This finding suggests that face representations are retinotopic and individuals enact gaze strategies that optimize identification, yet the extent to which this behavior reflects real-world gaze behavior is unknown. Here, we used mobile eye trackers to test whether individual differences in face gaze generalize from lab to real-world vision. In-lab fixations were measured with a speeded face identification task, while real-world behavior was measured as subjects freely walked around the Massachusetts Institute of Technology campus. We found a strong correlation between the patterns of individual differences in face gaze in the lab and real-world settings. Our findings support the hypothesis that individuals optimize real-world face identification by consistently fixating the same location and thus strongly constraining the space of retinotopic input. The methods developed for this study entailed collecting a large set of high-definition, wide field-of-view natural videos from head-mounted cameras and the viewer's fixation position, allowing us to characterize subjects' actually experienced real-world retinotopic images. These images enable us to ask how vision is optimized not just for the statistics of the "natural images" found in web databases, but of the truly natural, retinotopic images that have landed on actual human retinae during real-world experience.
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios, when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
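The wavelet-based verification idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual scheme: it uses a hand-rolled Haar approximation subband, random arrays in place of real face images, and an arbitrary distance threshold.

```python
# Toy sketch of wavelet-based face verification: compare the low-frequency
# (approximation) subbands of a probe and an enrolled template.
import numpy as np

def haar_ll(img, levels=2):
    """Return the low-low (approximation) subband of a 2-D Haar wavelet
    decomposition after `levels` levels, via 2x2 block averaging."""
    out = img.astype(float)
    for _ in range(levels):
        h, w = out.shape
        out = 0.25 * (out[0:h:2, 0:w:2] + out[1:h:2, 0:w:2]
                      + out[0:h:2, 1:w:2] + out[1:h:2, 1:w:2])
    return out

def verify(probe, template, threshold=0.02):
    """Accept the claimed identity if the normalized distance between the
    two approximation subbands falls below an illustrative threshold."""
    a, b = haar_ll(probe).ravel(), haar_ll(template).ravel()
    dist = np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-9)
    return bool(dist < threshold)

rng = np.random.default_rng(1)
enrolled = rng.uniform(size=(64, 64))                      # enrolled "face"
genuine = enrolled + rng.normal(scale=0.01, size=(64, 64)) # same person, slight noise
impostor = rng.uniform(size=(64, 64))                      # unrelated image

print(verify(genuine, enrolled), verify(impostor, enrolled))
```

Keeping only the coarse approximation subband discards high-frequency detail that varies with lighting and expression, which is one reason wavelet features suit low-power devices like the PDA described above.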
Ma, Yina; Han, Shihui
2010-06-01
Human adults usually respond faster to their own faces than to those of others. We tested the hypothesis that an implicit positive association (IPA) with the self mediates the self-advantage in face recognition through 4 experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and led to a weakened IPA with the self, we found that the self-face advantage in an implicit face-recognition task that required identification of face orientation was eliminated by the SCT priming. Moreover, the SCT effect on self-face recognition was evident only with left-hand responses. Furthermore, the SCT effect on self-face recognition was observed in both Chinese and American participants. Our findings support the IPA hypothesis, which defines a social cognitive mechanism of the self-advantage in face recognition.
Neural circuitry of emotional face processing in autism spectrum disorders.
Monk, Christopher S; Weng, Shih-Jen; Wiggins, Jillian Lee; Kurapati, Nikhil; Louro, Hugo M C; Carrasco, Melisa; Maslowsky, Julie; Risi, Susan; Lord, Catherine
2010-03-01
Autism spectrum disorders (ASD) are associated with severe impairments in social functioning. Because faces provide nonverbal cues that support social interactions, many studies of ASD have examined neural structures that process faces, including the amygdala, ventromedial prefrontal cortex and superior and middle temporal gyri. However, increases or decreases in activation are often contingent on the cognitive task. Specifically, the cognitive domain of attention influences group differences in brain activation. We investigated brain function abnormalities in participants with ASD using a task that monitored attention bias to emotional faces. Twenty-four participants (12 with ASD, 12 controls) completed a functional magnetic resonance imaging study while performing an attention cuing task with emotional (happy, sad, angry) and neutral faces. In response to emotional faces, those in the ASD group showed greater right amygdala activation than those in the control group. A preliminary psychophysiological connectivity analysis showed that ASD participants had stronger positive right amygdala and ventromedial prefrontal cortex coupling and weaker positive right amygdala and temporal lobe coupling than controls. There were no group differences in the behavioural measure of attention bias to the emotional faces. The small sample size may have affected our ability to detect additional group differences. When attention bias to emotional faces was equivalent between ASD and control groups, ASD was associated with greater amygdala activation. 
Preliminary analyses showed that ASD participants had stronger connectivity between the amygdala and ventromedial prefrontal cortex (a network implicated in emotional modulation) and weaker connectivity between the amygdala and temporal lobe (a pathway involved in the identification of facial expressions, although areas of group differences were generally in a more anterior region of the temporal lobe than what is typically reported for emotional face processing). These alterations in connectivity are consistent with emotion and face processing disturbances in ASD.
He, Yi; Johnson, Marcia K; Dovidio, John F; McCarthy, Gregory
2009-01-01
The neural correlates of the perception of faces from different races were investigated. White participants performed a gender identification task in which Asian, Black, and White faces were presented while event-related potentials (ERPs) were recorded. Participants also completed an implicit association task for Black (IAT-Black) and Asian (IAT-Asian) faces. ERPs evoked by Black and White faces differed, with Black faces evoking a larger positive ERP that peaked at 168 ms over the frontal scalp, and White faces evoking a larger negative ERP that peaked at 244 ms. These Black/White ERP differences significantly correlated with participants' scores on the IAT-Black. ERPs also differentiated White from Asian faces and a significant correlation was obtained between the White-Asian ERP difference waves at approximately 500 ms and the IAT-Asian. A positive ERP at 116 ms over occipital scalp differentiated all three races, but was not correlated with either IAT. In addition, a late positive component (around 592 ms) was greater for the same race compared to either other race faces, suggesting potentially more extended or deeper processing of the same race faces. Taken together, the ERP/IAT correlations observed for both other races indicate the influence of a race-sensitive evaluative process that may include early more automatic and/or implicit processes and relatively later more controlled processes.
Cardoso, Christopher; Ellenbogen, Mark A; Linnen, Anne-Marie
2014-02-01
Evidence suggests that intranasal oxytocin enhances the perception of emotion in facial expressions during standard emotion identification tasks. However, it is not clear whether this effect is desirable in people who do not show deficits in emotion perception. That is, a heightened perception of emotion in faces could lead to "oversensitivity" to the emotions of others in nonclinical participants. The goal of this study was to assess the effects of intranasal oxytocin on emotion perception using ecologically valid social and nonsocial visual tasks. Eighty-two participants (42 women) self-administered a 24 IU dose of intranasal oxytocin or a placebo in a double-blind, randomized experiment and then completed the perceiving and understanding emotion components of the Mayer-Salovey-Caruso Emotional Intelligence Test. In this test, emotion identification accuracy is based on agreement with a normative sample. As expected, participants administered intranasal oxytocin rated emotion in facial stimuli as expressing greater emotional intensity than those given a placebo. Consequently, accurate identification of emotion in faces, based on agreement with a normative sample, was impaired in the oxytocin group relative to placebo. No such effect was observed for tests using nonsocial stimuli. The results are consistent with the hypothesis that intranasal oxytocin enhances the salience of social stimuli in the environment, but not nonsocial stimuli. The present findings support a growing literature showing that the effects of intranasal oxytocin on social cognition can be negative under certain circumstances, in this case promoting "oversensitivity" to emotion in faces in healthy people. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Hayward, William G; Favelle, Simone K; Oxner, Matt; Chu, Ming Hon; Lam, Sze Man
2017-05-01
The other-race effect in face identification has been reported in many situations and by many different ethnicities, yet it remains poorly understood. One reason for this lack of clarity may be a limitation in the methodologies that have been used to test it. Experiments typically use an old-new recognition task to demonstrate the existence of the other-race effect, but such tasks are susceptible to different social and perceptual influences, particularly in terms of the extent to which all faces are equally individuated at study. In this paper we report an experiment in which we used a face learning methodology to measure the other-race effect. We obtained naturalistic photographs of Chinese and Caucasian individuals, which allowed us to test the ability of participants to generalize their learning to new ecologically valid exemplars of a face identity. We show a strong own-race advantage in face learning, such that participants required many fewer trials to learn names of own-race individuals than those of other-race individuals and were better able to identify learned own-race individuals in novel naturalistic stimuli. Since our methodology requires individuation of all faces, and generalization over large image changes, our finding of an other-race effect can be attributed to a specific deficit in the sensitivity of perceptual and memory processes to other-race faces.
Postural Control in Children with Dyslexia: Effects of Emotional Stimuli in a Dual-Task Environment.
Goulème, Nathalie; Gerard, Christophe-Loïc; Bucci, Maria Pia
2017-08-01
The aim of this study was to compare the visual exploration strategies used during a postural control task across participants with and without dyslexia. We simultaneously recorded eye movements and postural control while children were viewing different types of emotional faces. Twenty-two children with dyslexia and twenty-two age-matched children without dyslexia participated in the study. We analysed the surface area, the length and the mean velocity of the centre of pressure for balance, in parallel with visual saccadic latency, the number of saccades and the time spent in regions of interest. Our results showed that postural stability in children with dyslexia was weaker and the surface area of their centre of pressure increased significantly when they viewed an unpleasant face. Moreover, children with dyslexia used different strategies to those used by children without dyslexia during visual exploration, in particular when they viewed unpleasant emotional faces. We suggest that lower performance in emotional face processing in children with dyslexia could be due to a difference in their visual strategies, linked to their identification of unpleasant emotional faces. Copyright © 2017 John Wiley & Sons, Ltd.
Professional Competences of Teachers for Fostering Creativity and Supporting High-Achieving Students
ERIC Educational Resources Information Center
Hoth, Jessica; Kaiser, Gabriele; Busse, Andreas; Döhrmann, Martina; König, Johannes; Blömeke, Sigrid
2017-01-01
This paper addresses an important task teachers face in class: the identification and support of creative and high-achieving students. In particular, we examine whether primary teachers (1) have acquired professional knowledge during teacher education that is necessary to foster creativity and to teach high-achieving students, and whether they (2)…
Opposing Amygdala and Ventral Striatum Connectivity During Emotion Identification
Satterthwaite, Theodore D.; Wolf, Daniel H.; Pinkham, Amy E.; Ruparel, Kosha; Elliott, Mark A.; Valdez, Jeffrey N.; Overton, Eve; Seubert, Janina; Gur, Raquel E.; Gur, Ruben C.; Loughead, James
2011-01-01
Lesion and electrophysiological studies in animals provide evidence of opposing functions for subcortical nuclei such as the amygdala and ventral striatum, but the implications of these findings for emotion identification in humans remain poorly described. Here we report a high-resolution fMRI study in a sample of 39 healthy subjects who performed a well-characterized emotion identification task. As expected, the amygdala responded to THREAT (angry or fearful) faces more than NON-THREAT (sad or happy) faces. A functional connectivity analysis of the time series from an anatomically defined amygdala seed revealed a strong anti-correlation between the amygdala and the ventral striatum/ventral pallidum, consistent with an opposing role for these regions during emotion identification. A second functional connectivity analysis (psychophysiological interaction) investigating relative connectivity on THREAT vs. NON-THREAT trials demonstrated that the amygdala had increased connectivity with the orbitofrontal cortex during THREAT trials, whereas the ventral striatum demonstrated increased connectivity with the posterior hippocampus on NON-THREAT trials. These results indicate that activity in the amygdala and ventral striatum may be inversely related, and that both regions may provide opposing affective bias signals during emotion identification. PMID:21600684
Sparse modeling applied to patient identification for safety in medical physics applications
NASA Astrophysics Data System (ADS)
Lewkowitz, Stephanie
Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet it remains a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different tailored subroutines for fingerprints. In this research, the same model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse Modeling is a powerful tool that has already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images, owing to their structure, are sparse in some bases. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a neurally inspired artificial neural network, solves the computationally difficult task of finding the sparse code for the test image.
The components of the sparse representation vector are summed by ℓ1 pooling, and correct patient identification is achieved consistently, at 100% over 1000 trials, when either the face data or the fingerprint data are used as the classification basis. The algorithm also achieves 100% classification when faces and fingerprints are concatenated into multimodal datasets. This suggests that 100% patient identification will be achievable in the clinical setting.
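As a rough illustration of the pipeline described above, the sketch below implements the core LCA dynamics (soft-thresholded leaky integration with lateral inhibition) and an ℓ1-pooling classifier over a labeled library. The hyperparameters `lam`, `dt`, and `n_iter`, and the synthetic data in the usage example, are illustrative choices, not the paper's.

```python
import numpy as np

def lca_sparse_code(D, x, lam=0.1, dt=0.05, n_iter=500):
    """Locally Competitive Algorithm: find a sparse code a with x ~= D @ a.
    D's columns are the unit-norm library images (the basis)."""
    b = D.T @ x                        # feed-forward driver input
    G = D.T @ D - np.eye(D.shape[1])   # lateral inhibition (Gram minus self)
    u = np.zeros(D.shape[1])           # "membrane potentials"
    soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    for _ in range(n_iter):
        u += dt * (b - u - G @ soft(u))
    return soft(u)

def classify(D, labels, x):
    """l1 pooling: sum |a| per patient label and pick the largest pool."""
    a = lca_sparse_code(D, x)
    scores = {}
    for lbl in set(labels):
        idx = [i for i, l in enumerate(labels) if l == lbl]
        scores[lbl] = np.abs(a[idx]).sum()
    return max(scores, key=scores.get)
```

A leave-one-out trial then removes one image from the library, encodes it against the remaining columns, and checks that the pooled sparse code points to the correct patient.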
Beacher, Felix D C C; Gray, Marcus A; Minati, Ludovico; Whale, Richard; Harrison, Neil A; Critchley, Hugo D
2011-02-01
Acute tryptophan depletion (ATD) decreases levels of central serotonin. ATD thus enables the cognitive effects of serotonin to be studied, with implications for the understanding of psychiatric conditions, including depression. To determine the role of serotonin in conscious (explicit) and unconscious/incidental processing of emotional information. A randomized, double-blind, cross-over design was used with 15 healthy female participants. Subjective mood was recorded at baseline and after 4 h, when participants performed an explicit emotional face processing task, and a task eliciting unconscious processing of emotionally aversive and neutral images presented subliminally using backward masking. ATD was associated with a robust reduction in plasma tryptophan at 4 h but had no effect on mood or autonomic physiology. ATD was associated with significantly lower attractiveness ratings for happy faces and attenuation of intensity/arousal ratings of angry faces. ATD also reduced overall reaction times on the unconscious perception task, but there was no interaction with emotional content of masked stimuli. ATD did not affect breakthrough perception (accuracy in identification) of masked images. ATD attenuates the attractiveness of positive faces and the negative intensity of threatening faces, suggesting that serotonin contributes specifically to the appraisal of the social salience of both positive and negative salient social emotional cues. We found no evidence that serotonin affects unconscious processing of negative emotional stimuli. These novel findings implicate serotonin in conscious aspects of active social and behavioural engagement and extend knowledge regarding the effects of ATD on emotional perception.
Multi-stream face recognition for crime-fighting
NASA Astrophysics Data System (ADS)
Jassim, Sabah A.; Sellahewa, Harin
2007-04-01
Automatic face recognition (AFR) is a challenging task; the face is increasingly becoming the preferred biometric trait for identification, and AFR has the potential to become an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, with various wavelet subbands serving as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
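A minimal sketch of the multi-stream idea, assuming a one-level Haar decomposition and a simple weighted-sum score fusion; the paper's actual schemes, subband choices, and fusion rule may differ.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands.
    img: 2D array with even dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    return {
        "LL": (a[:, 0::2] + a[:, 1::2]) / 2.0,
        "LH": (a[:, 0::2] - a[:, 1::2]) / 2.0,
        "HL": (d[:, 0::2] + d[:, 1::2]) / 2.0,
        "HH": (d[:, 0::2] - d[:, 1::2]) / 2.0,
    }

def multi_stream_identify(gallery, probe, weights=None):
    """Treat each subband as a stream; fuse per-stream nearest-neighbour
    distances by a weighted sum. The weights here are illustrative."""
    weights = weights or {"LL": 0.5, "LH": 0.2, "HL": 0.2, "HH": 0.1}
    p = haar2d(probe)
    best, best_score = None, np.inf
    for name, img in gallery.items():
        g = haar2d(img)
        score = sum(w * np.linalg.norm(p[s] - g[s]) for s, w in weights.items())
        if score < best_score:
            best, best_score = name, score
    return best
```

Down-weighting the illumination-sensitive LL stream relative to the detail subbands is one way such a fusion can gain robustness to lighting changes.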
Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick
2013-01-01
Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
Impaired Value Learning for Faces in Preschoolers With Autism Spectrum Disorder.
Wang, Quan; DiNicola, Lauren; Heymann, Perrine; Hampson, Michelle; Chawarska, Katarzyna
2018-01-01
One of the common findings in autism spectrum disorder (ASD) is limited selective attention toward social objects, such as faces. Evidence from both human and nonhuman primate studies suggests that selection of objects for processing is guided by the appraisal of object values. We hypothesized that impairments in selective attention in ASD may reflect a disruption of a system supporting learning about object values in the social domain. We examined value learning in social (faces) and nonsocial (fractals) domains in preschoolers with ASD (n = 25) and typically developing (TD) controls (n = 28), using a novel value learning task implemented on a gaze-contingent eye-tracking platform consisting of value learning and a selective attention choice test. Children with ASD performed more poorly than TD controls on the social value learning task, but both groups performed similarly on the nonsocial task. Within-group comparisons indicated that value learning in TD children was enhanced on the social compared to the nonsocial task, but no such enhancement was seen in children with ASD. Performance in the social and nonsocial conditions was correlated in the ASD but not in the TD group. The study provides support for a domain-specific impairment in value learning for faces in ASD, and suggests that, in ASD, value learning in social and nonsocial domains may rely on a shared mechanism. These findings have implications both for models of selective social attention deficits in autism and for identification of novel treatment targets. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Biased lineup instructions and face identification from video images.
Thompson, W Burt; Johnson, Jaime
2008-01-01
Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.
Glucose enhancement of a facial recognition task in young adults.
Metzger, M M
2000-02-01
Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.
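The d-prime statistic referred to above can be computed from the hit and false-alarm rates as the difference of their z-transforms; a minimal sketch using the standard signal-detection formula:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

This makes the reported pattern concrete: with hit rates equal across groups, the group committing fewer false alarms necessarily shows the higher d' (e.g. d_prime(0.80, 0.10) > d_prime(0.80, 0.20)).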
Li, Huijie; Chan, Raymond C K; Zhao, Qing; Hong, Xiaohong; Gong, Qi-Yong
2010-03-17
Although there is a consensus that patients with schizophrenia have certain deficits in perceiving and expressing facial emotions, previous studies of facial emotion perception in schizophrenia do not present consistent results. The objective of this study was to explore facial emotion perception deficits in Chinese patients with schizophrenia and their non-psychotic first-degree relatives. Sixty-nine patients with schizophrenia, 56 of their first-degree relatives (33 parents and 23 siblings), and 92 healthy controls (67 younger healthy controls matched to the patients and siblings, and 25 older healthy controls matched to the parents) completed a set of facial emotion perception tasks, including facial emotion discrimination, identification, intensity, valence, and corresponding face identification tasks. The results demonstrated that patients with schizophrenia performed significantly worse than their siblings and younger healthy controls in accuracy on a variety of facial emotion perception tasks, whereas the siblings of the patients performed as well as the corresponding younger healthy controls on all of the facial emotion perception tasks. Patients with schizophrenia also responded significantly more slowly than younger healthy controls, while siblings of patients did not differ significantly from either the patients or the younger healthy controls in speed. We also found that parents of the schizophrenia patients performed significantly worse than the corresponding older healthy controls in accuracy in terms of facial emotion identification, valence, and the composite index of the facial discrimination, identification, intensity, and valence tasks. Moreover, no significant differences were found between the parents of patients and older healthy controls in speed after controlling for years of education and IQ. Taken together, the results suggest that facial emotion perception deficits may serve as potential endophenotypes for schizophrenia.
Copyright 2010 Elsevier Inc. All rights reserved.
Baker, Lewis J; Levin, Daniel T
2016-12-01
Levin and Banaji (Journal of Experimental Psychology: General, 135, 501-512, 2006) reported a lightness illusion in which participants appeared to perceive Black faces to be darker than White faces, even though the faces were matched for overall brightness and contrast. Recently, this finding was challenged by Firestone and Scholl (Psychonomic Bulletin and Review, 2014), who argued that the nominal illusion remained even when the faces were blurred so as to make their race undetectable, and concluded that uncontrolled perceptual differences between the stimulus faces drove at least some observations of the original distortion effect. In this paper we report that measures of race perception used by Firestone and Scholl were insufficiently sensitive. We demonstrate that a forced choice race-identification task not only reveals that participants could detect the race of the blurred faces but also that participants' lightness judgments often aligned with their assignment of race.
Targeting specific facial variation for different identification tasks.
Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald
2010-09-10
A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three-dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and property-dependent (D2) components along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: Classification tasks and a Recognition task. In the Classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both Euclidean and Mahalanobis distances performed best in the Classification task, correctly assigning 99.8% of scans to the right population group. The Recognition task tested whether a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, correctly identifying 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and that this constitutes a novel method for quantifying facial similarity.
Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed the Euclidean distance, the opposite was true for the decomposed distances. Crown Copyright 2010. Published by Elsevier Ireland Ltd. All rights reserved.
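One plausible reading of the decomposition, sketched below under the assumption that a property pathway is a direction in the PCA space: the property-dependent component (D2) is the projection of the between-face difference onto that direction, and the property-independent component (D1) is the orthogonal residual. This shows the Euclidean case only; a Mahalanobis variant would first whiten the space.

```python
import numpy as np

def decompose_distance(a, b, pathway):
    """Split the Euclidean distance between PCA-space faces a and b into
    a component along a property pathway (D2, e.g. degree of smiling) and
    a property-independent residual (D1)."""
    u = pathway / np.linalg.norm(pathway)   # unit direction of the property
    v = b - a                               # between-face difference
    d2 = abs(v @ u)                         # property-dependent component
    d1 = np.linalg.norm(v - (v @ u) * u)    # property-independent component
    return d1, d2
```

By the Pythagorean relation, D1² + D2² recovers the squared Euclidean distance, which is consistent with using D1 for identification (expression factored out) and D2 for expression classification.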
Rieffe, Carolien; Wiefferink, Carin H
2017-03-01
The capacity for emotion recognition and understanding is crucial for daily social functioning. We examined to what extent this capacity is impaired in young children with a Language Impairment (LI). In typical development, children learn to recognize emotions in faces and situations through social experiences and social learning. Children with LI have less access to these experiences and are therefore expected to fall behind their peers without LI. In this study, 89 preschool children with LI and 202 children without LI (mean age 3 years and 10 months in both groups) were tested on three indices for facial emotion recognition (discrimination, identification, and attribution in emotion evoking situations). Parents reported on their children's emotion vocabulary and ability to talk about their own emotions. Preschoolers with and without LI performed similarly on the non-verbal task for emotion discrimination. Children with LI fell behind their peers without LI on both other tasks for emotion recognition that involved labelling the four basic emotions (happy, sad, angry, fear). The outcomes of these two tasks were also related to children's level of emotion language. These outcomes emphasize the importance of 'emotion talk' at the youngest age possible for children with LI. Copyright © 2017 Elsevier Ltd. All rights reserved.
Masking with faces in central visual field under a variety of temporal schedules.
Daar, Marwan; Wilson, Hugh R
2015-11-01
With a few exceptions, previous studies have explored masking using either a backward mask or a common onset trailing mask, but not both. In a series of experiments, we demonstrate the use of faces in central visual field as a viable method to study the relationship between these two types of mask schedule. We tested observers in a two alternative forced choice face identification task, where both target and mask comprised synthetic faces, and show that a simple model can successfully predict masking across a variety of masking schedules ranging from a backward mask to a common onset trailing mask and a number of intermediate variations. Our data are well accounted for by a window of sensitivity to mask interference that is centered at around 100 ms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part, and 97.25% for the whole face).
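For the 2DPCA half of the hybrid extractor (the 2DLDA and neural-network stages are omitted here), a minimal numpy sketch following the standard image-covariance formulation, in which projection axes are computed from the image matrices directly rather than from flattened vectors:

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: projection axes from image matrices (no vectorisation).
    images: array of shape (n, h, w); returns the top-k axes (w, k)."""
    mean = np.mean(images, axis=0)
    # image covariance matrix G = E[(A - mean)^T (A - mean)]
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    vals, vecs = np.linalg.eigh(G)     # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]        # keep the k largest

def project(A, X):
    """Feature matrix Y = A X used as input to the classifier."""
    return A @ X
```

In a parts-based setup like the one above, the same routine would be applied separately to the eye images and the nose images before classification.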
Development of a battery of functional tests for low vision.
Dougherty, Bradley E; Martin, Scott R; Kelly, Corey B; Jones, Lisa A; Raasch, Thomas W; Bullimore, Mark A
2009-08-01
We describe the development and evaluation of a battery of tests of functional visual performance of everyday tasks intended to be suitable for assessment of low vision patients. The functional test battery comprises: Reading rate (reading aloud 20 unrelated words for each of four print sizes: 8, 4, 2, and 1 M); Telephone book (finding a name and reading the telephone number); Medicine bottle label (reading the name and dosing); Utility bill (reading the due date and amount due); Cooking instructions (reading the cooking time on a food package); Coin sorting (making a specified amount from coins placed on a table); Playing card recognition (identifying denomination and suit); and Face recognition (identifying expressions of printed, life-size faces at 1 and 3 m). All tests were timed except face and playing card recognition. Fourteen normally sighted and 24 low vision subjects were assessed with the functional test battery. Visual acuity, contrast sensitivity, and quality of life (National Eye Institute Visual Function Questionnaire 25 [NEI-VFQ 25]) were measured and the functional tests repeated. Subsequently, 23 low vision patients participated in a pilot randomized clinical trial, with half receiving low vision rehabilitation and half a delayed intervention. The functional tests were administered at enrollment and 3 months later. Normally sighted subjects could perform all tasks, but the proportion of trials performed correctly by the low vision subjects ranged from 35% for face recognition at 3 m to 95% for playing card identification. On average, low vision subjects performed three times slower than the normally sighted subjects. Timed tasks with a visual search component showed poorer repeatability. In the pilot clinical trial, low vision rehabilitation produced the greatest improvement for the medicine bottle and cooking instruction tasks. Performance of patients on these functional tests has been assessed, and some appear responsive to low vision rehabilitation.
Jenkins, Rob; Burton, A. Mike
2011-01-01
Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research. PMID:21536553
Computational approaches to protein inference in shotgun proteomics
2012-01-01
Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high-throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
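As a concrete instance of the combinatorial-optimization category described above, the sketch below implements a greedy set-cover heuristic for parsimonious protein inference: repeatedly keep the protein that explains the most still-unexplained peptides. This is an illustrative simplification of the parsimony idea, not a method from the review itself.

```python
def minimal_protein_set(peptide_to_proteins):
    """Greedy approximation to the minimal set of proteins explaining
    all assigned peptides. peptide_to_proteins: {peptide: {protein, ...}}."""
    uncovered = set(peptide_to_proteins)
    # invert the map: protein -> set of peptides it could explain
    prot_peps = {}
    for pep, prots in peptide_to_proteins.items():
        for p in prots:
            prot_peps.setdefault(p, set()).add(pep)
    chosen = []
    while uncovered:
        # pick the protein covering the most unexplained peptides
        best = max(prot_peps, key=lambda p: len(prot_peps[p] & uncovered))
        gained = prot_peps[best] & uncovered
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return chosen
```

The exact version of this problem is NP-hard (minimum set cover), which is why practical tools pair heuristics like this with probabilistic scoring of the resulting protein list.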
Neural activation to emotional faces in adolescents with autism spectrum disorders.
Weng, Shih-Jen; Carrasco, Melisa; Swartz, Johnna R; Wiggins, Jillian Lee; Kurapati, Nikhil; Liberzon, Israel; Risi, Susan; Lord, Catherine; Monk, Christopher S
2011-03-01
Autism spectrum disorders (ASD) involve a core deficit in social functioning and impairments in the ability to recognize face emotions. In an emotional faces task designed to constrain group differences in attention, the present study used functional MRI to characterize activation in the amygdala, ventral prefrontal cortex (vPFC), and striatum, three structures involved in socio-emotional processing in adolescents with ASD. Twenty-two adolescents with ASD and 20 healthy adolescents viewed facial expressions (happy, fearful, sad and neutral) that were briefly presented (250 ms) during functional MRI acquisition. To monitor attention, subjects pressed a button to identify the gender of each face. The ASD group showed greater activation to the faces relative to the control group in the amygdala, vPFC and striatum. Follow-up analyses indicated that the ASD relative to control group showed greater activation in the amygdala, vPFC and striatum (p < .05 small volume corrected), particularly to sad faces. Moreover, in the ASD group, there was a negative correlation between developmental variables (age and pubertal status) and mean activation from the whole bilateral amygdala; younger adolescents showed greater activation than older adolescents. There were no group differences in accuracy or reaction time in the gender identification task. When group differences in attention to facial expressions were limited, adolescents with ASD showed greater activation in structures involved in socio-emotional processing. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
Wierzchoń, Michał; Wronka, Eligiusz; Paulewicz, Borysław; Szczepanowski, Remigiusz
2016-01-01
The present research investigated metacognitive awareness of emotional stimuli and its psychophysiological correlates. We used a backward masking task presenting participants with fearful or neutral faces. We asked participants for face discrimination and then probed their metacognitive awareness with confidence rating (CR) and post-decision wagering (PDW) scales. We also analysed psychophysiological correlates of awareness with event-related potential (ERP) components: P1, N170, early posterior negativity (EPN), and P3. We have not observed any differences between PDW and CR conditions in the emotion identification task. However, the "aware" ratings were associated with increased accuracy performance. This effect was more pronounced in PDW, especially for fearful faces, suggesting that emotional stimuli awareness may be enhanced by monetary incentives. EEG analysis showed larger N170, EPN and P3 amplitudes in aware compared to unaware trials. It also appeared that both EPN and P3 ERP components were more pronounced in the PDW condition, especially when emotional faces were presented. Taken together, our ERP findings suggest that metacognitive awareness of emotional stimuli depends on the effectiveness of both early and late visual information processing. Our study also indicates that awareness of emotional stimuli can be enhanced by the motivation induced by wagering. PMID:27490816
Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.
Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun
2011-12-01
Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of the Electrocardiography (ECG) signal and the limited bandwidth of the Internet. However, with present ECG-based biometric techniques, compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task. This becomes an obvious burden on a system if it must be done for a trillion compressed ECGs per hour by a hospital. Even though a hospital might be able to build an expensive infrastructure to tame the exuberant processing load, for small intermediate nodes in a multihop network, identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as templates for other forms of biometrics such as face, finger, and retina (up to 8302 times smaller than a face template and 9 times smaller than the existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.
Is the Self Always Better than a Friend? Self-Face Recognition in Christians and Atheists
Ma, Yina; Han, Shihui
2012-01-01
Early behavioral studies found that human adults responded faster to their own faces than faces of familiar others or strangers, a finding referred to as self-face advantage. Recent research suggests that the self-face advantage is mediated by implicit positive association with the self and is influenced by sociocultural experience. The current study investigated whether and how Christian belief and practice affect the processing of self-face in a Chinese population. Christian and Atheist participants were recruited for an implicit association test (IAT) in Experiment 1 and a face-owner identification task in Experiment 2. Experiment 1 found that atheists responded faster to self-face when it shared the same response key with positive compared to negative trait adjectives. This IAT effect, however, was significantly reduced in Christians. Experiment 2 found that atheists responded faster to self-face compared to a friend’s face, but this self-face advantage was significantly reduced in Christians. Hierarchical regression analyses further showed that the IAT effect positively predicted self-face advantage in atheists but not in Christians. Our findings suggest that Christian belief and practice may weaken implicit positive association with the self and thus decrease the advantage of the self over a friend during face recognition in the believers. PMID:22662231
Task relevance of emotional information affects anxiety-linked attention bias in visual search.
Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies
2017-01-01
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.
Macapagal, Kathryn R.; Rupp, Heather A.; Heiman, Julia R.
2011-01-01
Evaluations of male faces depend on attributes of the observer and target and may influence future social and sexual decisions. However, it is unknown whether adherence to hypertraditional gender roles may shape women’s evaluations of potential sexual partners or men’s evaluations of potential competitors. Using a photo task, we tested participants’ judgments of attractiveness, trustworthiness, aggressiveness, and masculinity of male faces altered to appear more masculine or feminine. Findings revealed that higher hypermasculinity scores in male observers were correlated with higher attractiveness and trustworthiness ratings of the male faces; conversely, higher hyperfemininity scores in female observers were associated with lower ratings on those traits. Male observers also rated the faces as more aggressive than did female observers. Regarding ratings by face type, masculinized faces were rated more aggressive than feminized faces, and women’s ratings did not discriminate between altered faces better than men’s ratings. These results suggest that first impressions of men can be explained in part by socioculturally- and evolutionarily-relevant factors such as the observer’s sex and gender role adherence, as well as the target’s facial masculinity. PMID:21874151
Covert face recognition in congenital prosopagnosia: a group study.
Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max
2012-03-01
Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.
The Caledonian face test: A new test of face discrimination.
Logan, Andrew J; Wilkinson, Frances; Wilson, Hugh R; Gordon, Gael E; Loffler, Gunter
2016-02-01
This study aimed to develop a clinical test of face perception which is applicable to a wide range of patients and can capture normal variability. The Caledonian face test utilises synthetic faces which combine simplicity with sufficient realism to permit individual identification. Face discrimination thresholds (i.e. minimum difference between faces required for accurate discrimination) were determined in an "odd-one-out" task. The difference between faces was controlled by an adaptive QUEST procedure. A broad range of face discrimination sensitivity was determined from a group (N=52) of young adults (mean 5.75%; SD 1.18; range 3.33-8.84%). The test is fast (3-4 min), repeatable (test-re-test r(2)=0.795) and demonstrates a significant inversion effect. The potential to identify impairments of face discrimination was evaluated by testing LM who reported a lifelong difficulty with face perception. While LM's impairment for two established face tests was close to the criterion for significance (Z-scores of -2.20 and -2.27) for the Caledonian face test, her Z-score was -7.26, implying a more than threefold higher sensitivity. The new face test provides a quantifiable and repeatable assessment of face discrimination ability. The enhanced sensitivity suggests that the Caledonian face test may be capable of detecting more subtle impairments of face perception than available tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
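The adaptive QUEST procedure mentioned above places each trial at the most probable threshold under a Bayesian posterior. A simplified sketch in that spirit (not Watson and Pelli's exact implementation; the 4-AFC guess rate, Weibull slope, and prior are our assumptions):

```python
import numpy as np

levels = np.linspace(1.0, 15.0, 200)          # candidate thresholds (%)
# Weak Gaussian prior over the threshold, centred at 8%.
log_post = -0.5 * ((levels - 8.0) / 4.0) ** 2

def p_correct(stimulus, threshold, slope=3.5, guess=0.25, lapse=0.01):
    """Weibull psychometric function; the guess rate assumes a
    4-alternative odd-one-out design (our assumption)."""
    p = 1.0 - np.exp(-(stimulus / threshold) ** slope)
    return guess + (1.0 - guess - lapse) * p

def next_level():
    return levels[np.argmax(log_post)]        # test at the posterior mode

def update(stimulus, was_correct):
    global log_post
    p = p_correct(stimulus, levels)
    log_post += np.log(p if was_correct else 1.0 - p)

# Simulate 60 trials with an observer whose true threshold is 6%.
rng = np.random.default_rng(0)
for _ in range(60):
    x = next_level()
    update(x, bool(rng.random() < p_correct(x, 6.0)))
print(f"estimated threshold: {next_level():.1f}%")
```

After each response the posterior over candidate thresholds is reweighted by the likelihood of that response, so trials concentrate near the threshold and the estimate converges quickly, which is what makes a 3-4 minute test feasible.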
Social identity modifies face perception: an ERP study of social categorization
Stedehouder, Jeffrey; Ito, Tiffany A.
2015-01-01
Two studies examined whether social identity processes, i.e. group identification and social identity threat, amplify the degree to which people attend to social category information in early perception [assessed with event-related brain potentials (ERPs)]. Participants were presented with faces of Muslims and non-Muslims in an evaluative priming task while ERPs were measured and implicit evaluative bias was assessed. Study 1 revealed that non-Muslims showed stronger differentiation between ingroup and outgroup faces in both early (N200) and later processing stages (implicit evaluations) when they identified more strongly with their ethnic group. Moreover, identification effects on implicit bias were mediated by intergroup differentiation in the N200. In Study 2, social identity threat (vs control) was manipulated among Muslims. Results revealed that high social identity threat resulted in stronger differentiation of Muslims from non-Muslims in early (N200) and late (implicit evaluations) processing stages, with N200 effects again predicting implicit bias. Combined, these studies reveal how seemingly bottom-up early social categorization processes are affected by individual and contextual variables that affect the meaning of social identity. Implications of these results for the social identity perspective as well as social cognitive theories of person perception are discussed. PMID:25140049
Racial bias in implicit danger associations generalizes to older male targets.
Lundberg, Gustav J W; Neel, Rebecca; Lassetter, Bethany; Todd, Andrew R
2018-01-01
Across two experiments, we examined whether implicit stereotypes linking younger (~28-year-old) Black versus White men with violence and criminality extend to older (~68-year-old) Black versus White men. In Experiment 1, participants completed a sequential priming task wherein they categorized objects as guns or tools after seeing briefly-presented facial images of men who varied in age (younger versus older) and race (Black versus White). In Experiment 2, we used different face primes of younger and older Black and White men, and participants categorized words as 'threatening' or 'safe.' Results consistently revealed robust racial biases in object and word identification: Dangerous objects and words were identified more easily (faster response times, lower error rates), and non-dangerous objects and words were identified less easily, after seeing Black face primes than after seeing White face primes. Process dissociation procedure analyses, which aim to isolate the unique contributions of automatic and controlled processes to task performance, further indicated that these effects were driven entirely by racial biases in automatic processing. In neither experiment did prime age moderate racial bias, suggesting that the implicit danger associations commonly evoked by younger Black versus White men appear to generalize to older Black versus White men.
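The process dissociation procedure referred to above decomposes accuracy into controlled and automatic components using the standard equations from the weapon-identification literature. A sketch with invented illustrative rates (not the experiments' data):

```python
# Payne-style process dissociation:
#   P(stereotypic response | congruent trial)  = C + A * (1 - C)
#   P(stereotypic error | incongruent trial)   = A * (1 - C)
# which gives C = P(correct | congruent) - P(error | incongruent)
# and         A = P(error | incongruent) / (1 - C)

def process_dissociation(p_correct_congruent, p_error_incongruent):
    control = p_correct_congruent - p_error_incongruent
    automatic = p_error_incongruent / (1.0 - control)
    return control, automatic

# Invented illustrative rates: more stereotype-consistent errors
# after Black primes than after White primes.
c_black, a_black = process_dissociation(0.92, 0.16)
c_white, a_white = process_dissociation(0.92, 0.08)
print(f"control:   Black {c_black:.2f}  White {c_white:.2f}")
print(f"automatic: Black {a_black:.2f}  White {a_white:.2f}")
```

A pattern of similar control estimates but a higher automatic estimate for one prime type is what the abstract describes as racial bias driven entirely by automatic processing.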
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
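The first, whole-image step of the described method can be sketched with a plain CCA fit and a back-projection from canonical space. This is our simplified illustration on toy feature vectors; the paper's patch-based second step and its actual image features are omitted.

```python
import numpy as np

def fit_cca(X, Y, n_comp, reg=1e-6):
    """Plain linear CCA via the whitened cross-covariance (ridge-regularised)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(Xc)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):                       # C^(-1/2) via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    Wx = Kx @ U[:, :n_comp]                # thermal -> canonical space
    Wy = Ky @ Vt[:n_comp].T                # visible -> canonical space
    return Wx, Wy, s[:n_comp]

def reconstruct(X, x_mean, y_mean, Wx, Wy, s):
    """Project thermal features into canonical space, scale by the
    canonical correlations, and map back to the visible space."""
    u = (X - x_mean) @ Wx
    return (u * s) @ np.linalg.pinv(Wy) + y_mean

# Toy stand-ins: "thermal" features X linearly related to "visible" Y.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
Y = X @ rng.normal(size=(8, 5)) + 0.01 * rng.normal(size=(300, 5))
Wx, Wy, s = fit_cca(X, Y, n_comp=5)
Y_hat = reconstruct(X, X.mean(0), Y.mean(0), Wx, Wy, s)
print(float(np.abs(Y_hat - Y).mean()))     # small reconstruction error
```

The design point is that CCA learns paired projections in which thermal and visible representations are maximally correlated, so a thermal probe can be carried across the spectrum gap through that shared space.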
How Well Do Computer-Generated Faces Tap Face Expertise?
Crookes, Kate; Ewing, Louise; Gildenhuys, Judith; Kloth, Nadine; Hayward, William G; Oxner, Matt; Pond, Stephen; Rhodes, Gillian
2015-01-01
The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE)-the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1 Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then caution is advised when interpreting results obtained using CG faces.
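Recognition memory accuracy in tasks like Experiment 1 is commonly scored as d-prime, with the ORE expressed as the own-race minus other-race difference. A sketch with invented hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented hit / false-alarm rates mimicking the reported pattern:
# a large ORE for photographs, a strongly attenuated ORE for CG faces.
ore_real = d_prime(0.85, 0.20) - d_prime(0.70, 0.30)   # own - other, photos
ore_cg   = d_prime(0.72, 0.28) - d_prime(0.68, 0.30)   # own - other, CG
print(f"ORE (real): {ore_real:.2f}   ORE (CG): {ore_cg:.2f}")
```

Scoring with d-prime rather than raw accuracy separates sensitivity from response bias, which matters when formats (real vs CG) may shift participants' willingness to respond "old".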
Emotion identification method using RGB information of human face
NASA Astrophysics Data System (ADS)
Kita, Shinya; Mita, Akira
2015-03-01
Recently, the number of single-person households has increased drastically due to the aging of society and the diversification of lifestyles; the evolution of building spaces is therefore in demand. The Biofied Building we propose can help address this situation: it supports interaction between the building and residents' conscious and unconscious information using robots. Unconscious information includes emotion, condition, and behavior, and one important piece of such information is thermal comfort, which we assume can be estimated from the human face. There is much research on face color analysis, but little of it has been conducted in real situations; that is, existing methods were not tested under disturbances such as room lamps. In this study, a Kinect sensor was used with face tracking, and room lamps and task lamps were used to verify that our method is applicable to real situations. Two rooms at 22 and 28 degrees C were prepared. We showed that the transition in thermal comfort caused by changing temperature can be observed from the human face. Thus, distinguishing between the 22 and 28 degrees C conditions from face color was shown to be possible.
Effects of facial color on the subliminal processing of fearful faces.
Nakajima, K; Minami, T; Nakauchi, S
2015-12-03
Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Caharel, Stéphanie; Bernard, Christian; Thibaut, Florence; Haouzir, Sadec; Di Maggio-Clozel, Carole; Allio, Gabrielle; Fouldrin, Gaël; Petit, Michel; Lalonde, Robert; Rebaï, Mohamed
2007-09-01
The main objective of the study was to determine whether patients with schizophrenia are deficient relative to controls in processing faces at different levels of familiarity and types of emotion, and at which stage such differences may occur. ERPs from 18 patients with schizophrenia and 18 controls were compared in a face identification task at three levels of familiarity (unknown, familiar, subject's own) and for three types of emotion (disgust, smiling, neutral). The schizophrenia group was less accurate than controls in face processing, especially for unknown faces and those expressing negative emotions such as disgust. P1 and N170 amplitudes were lower, and the P1, N170, and P250 components were of slower onset, in patients with schizophrenia. N170 and P250 amplitudes were modulated by familiarity and facial expression in a different manner in patients than in controls. Schizophrenia is associated with a generalized deficit of face processing, both in terms of familiarity and emotional expression, attributable to deficient processing at sensory (P1) and perceptual (N170) stages. These patients appear to have difficulty encoding the structure of a face and thereby do not evaluate familiarity and emotion correctly.
The Reliability of Facial Recognition of Deceased Persons on Photographs.
Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; Mazzarelli, Debora; Fracasso, Tony; Vanezis, Peter; Sforza, Chiarella; Cattaneo, Cristina
2017-09-01
In humanitarian emergencies, such as the current deceased migrants in the Mediterranean, antemortem documentation needed for identification may be limited. The use of visual identification has previously been reported in mass disasters such as the Thai tsunami. This pilot study explores the ability of observers to match unfamiliar faces of living and dead persons and whether facial morphology can be used for identification. A questionnaire was given to 41 students and five professionals in the field of forensic identification, with the task of choosing whether a facial photograph corresponds to one of the five photographs in a lineup and identifying the most useful features used for recognition. Although the overall recognition score did not significantly differ between professionals and students, the median scores of 78.1% and 80.0%, respectively, were too low to consider this a reliable identification method; it thus needs to be supported by other means. © 2017 American Academy of Forensic Sciences.
Developmental plateau in visual object processing from adolescence to adulthood in autism
O'Hearn, Kirsten; Tanaka, James; Lynn, Andrew; Fedor, Jennifer; Minshew, Nancy; Luna, Beatriz
2016-01-01
A lack of typical age-related improvement from adolescence to adulthood contributes to face recognition deficits in adults with autism on the Cambridge Face Memory Test (CFMT). The current studies examine if this atypical developmental trajectory generalizes to other tasks and objects, including parts of the face. The CFMT tests recognition of whole faces, often with a substantial delay. The current studies used the immediate memory (IM) task and the parts-whole face task from the Let's Face It! battery, which examines whole faces, face parts, and cars, without a delay between memorization and test trials. In the IM task, participants memorize a face or car. Immediately after the target disappears, participants identify the target from two similar distractors. In the part-whole task, participants memorize a whole face. Immediately after the face disappears, participants identify the target from a distractor with different eyes or mouth, either as a face part or a whole face. Results indicate that recognition deficits in autism become more robust by adulthood, consistent with previous work, and also become more general, including cars. In the IM task, deficits in autism were specific to faces in childhood, but included cars by adulthood. In the part-whole task, deficits in autism became more robust by adulthood, including both eyes and mouths as parts and in whole faces. Across tasks, the deficit in autism increased between adolescence and adulthood, reflecting a lack of typical improvement, leading to deficits with non-face stimuli and on a task without a memory delay. These results suggest that brain maturation continues to be affected into adulthood in autism, and that the transition from adolescence to adulthood is a vulnerable stage for those with autism. PMID:25019999
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals through face recognition utilizing facial features. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented as instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.
Validation of a short-term memory test for the recognition of people and faces.
Leyk, D; Sievert, A; Heiss, A; Gorges, W; Ridder, D; Alexander, T; Wunderlich, M; Ruther, T
2008-08-01
Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant and high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, margins of error are narrow and inadequate performance may have grave consequences. However, conventional short-term memory tests frequently use abstract settings with little relevance to working situations. They may thus be unable to capture task-specific decrements. The aim of the study was to devise and validate a new test, better reflecting job specifics and employing appropriate stimuli. After 1.5 s (short) or 4.5 s (long) presentation, a set of seven portraits of faces had to be memorised for comparison with two control stimuli. Stimulus appearance followed 2 s (first item) and 8 s (second item) after set presentation. Twenty-eight subjects (12 male, 16 female) were tested at seven different times of day, 3 h apart. Recognition rates were above 60% even for the least favourable condition. Recognition was significantly better in the 'long' condition (+10%) and for the first item (+18%). Recognition time showed significant differences (10%) between items. Minor effects of learning were found for response latencies only. Based on occupationally relevant metrics, the test displayed internal and external validity, consistency and suitability for further use in test/retest scenarios. In public security, especially where access to restricted areas is monitored, margins of error are narrow and operator performance must remain high and level. Appropriate schedules for personnel, based on valid test results, are required. However, task-specific data and performance tests, permitting the description of task-specific decrements, are not available. Thus, tests are required that account for task-specific conditions and neurophysiological characteristics.
Menon, Nadia; White, David; Kemp, Richard I
2015-01-01
According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching.
Straus, S G; McGrath, J E
1994-02-01
The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communications systems for group work.
Junhong, Huang; Renlai, Zhou; Senqi, Hu
2013-01-01
Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low or high demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that the reaction time was slower and the performance accuracy was higher while performing the low cognitive load task than while performing the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks and their brain event-related potentials (ERPs) were measured for a period of 800 ms after the onset of the task stimulus. The amplitudes of the early component of ERP around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites were significantly larger than those elicited by unattended neutral faces while performing the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attention resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces could automatically capture attention if residual attention resources were available under the unattended condition.
Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro
2012-01-01
Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547
Cornes, Katherine; Donnelly, Nick; Godwin, Hayward; Wenger, Michael J
2011-06-01
The Thatcher illusion (Thompson, 1980) is considered to be a prototypical illustration of the notion that face perception is dependent on configural processes and representations. We explored this idea by examining the relative contributions of perceptual and decisional processes to the ability of observers to identify the orientation of two classes of forms (faces and churches) and a set of their component features. Observers were presented with upright and inverted images of faces and churches in which the components (eyes, mouth, windows, doors) were presented either upright or inverted. Observers first rated the subjective grotesqueness of all of the images and then performed a complete identification task in which they had to identify the orientation of the overall form and the orientation of each of the interior features. Grotesqueness ratings for both classes of image showed the standard modulation of rated grotesqueness as a function of orientation. The complete identification results revealed violations of both perceptual and decisional separability but failed to reveal any violations of within-stimulus (perceptual) independence. In addition, exploration of a simple bivariate Gaussian signal detection model of the relationship between identification performance and judged grotesqueness suggests that within-stimulus violations of perceptual independence on their own are insufficient for producing the illusion. This lack of evidence for within-stimulus configurality suggests the need for a critical reevaluation of the role of configural processing in the Thatcher illusion. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing.
Filippi, Piera; Ocklenburg, Sebastian; Bowling, Daniel L; Heege, Larissa; Güntürkün, Onur; Newen, Albert; de Boer, Bart
2017-08-01
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
Herzmann, Grit
2016-07-01
The N250 and N250r (r for repetition, signaling a difference measure of priming) have been proposed to reflect the activation of perceptual memory representations for individual faces. Increased N250r and N250 amplitudes have been associated with higher levels of familiarity and expertise, respectively. In contrast to these observations, the N250 amplitude has been found to be larger for other-race than own-race faces in recognition memory tasks. This study investigated whether these findings were due to increased identity-specific processing demands for other-race relative to own-race faces and whether similar results would be obtained for the N250 in a repetition priming paradigm. Only Caucasian participants were available for testing; they completed two tasks with Caucasian, African-American, and Chinese faces. In a repetition priming task, participants decided whether sequentially presented faces were of the same identity (individuation task) or same race (categorization task). Increased N250 amplitudes were found for African-American and Chinese faces relative to Caucasian faces, replicating previous results in recognition memory tasks. Contrary to the expectation that increased N250 amplitudes for other-race faces would be confined to the individuation task, both tasks showed similar results. This could be because face identity information needed to be maintained across the sequential presentation of prime and target in both tasks. Increased N250 amplitudes for other-race faces are taken to represent increased neural demands on the identity-specific processing of other-race faces, which are typically processed less holistically and less on the level of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.
Amygdala hyperactivation to angry faces in intermittent explosive disorder.
McCloskey, Michael S; Phan, K Luan; Angstadt, Mike; Fettich, Karla C; Keedy, Sarah; Coccaro, Emil F
2016-08-01
Individuals with intermittent explosive disorder (IED) were previously found to exhibit amygdala hyperactivation and relatively reduced orbital medial prefrontal cortex (OMPFC) activation to angry faces while performing an implicit emotion information processing task during functional magnetic resonance imaging (fMRI). This study examines the neural substrates associated with explicit encoding of facial emotions among individuals with IED. Twenty unmedicated IED subjects and twenty healthy, matched comparison subjects (HC) underwent fMRI while viewing blocks of angry, happy, and neutral faces and identifying the emotional valence of each face (positive, negative or neutral). We compared amygdala and OMPFC reactivity to faces between IED and HC subjects. We also examined the relationship between amygdala/OMPFC activation and aggression severity. Compared to controls, the IED group exhibited greater amygdala response to angry (vs. neutral) facial expressions. In contrast, IED and control groups did not differ in OMPFC activation to angry faces. Across subjects amygdala activation to angry faces was correlated with number of prior aggressive acts. These findings extend previous evidence of amygdala dysfunction in response to the identification of an ecologically-valid social threat signal (processing angry faces) among individuals with IED, further substantiating a link between amygdala hyperactivity to social signals of direct threat and aggression. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.
Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S
2016-03-01
To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
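The abstract benchmarks DCP against local binary patterns (LBP), the baseline descriptor whose cost DCP roughly doubles. As a point of reference only (this is the classic 8-neighbour LBP, not the DCP descriptor itself), a minimal sketch:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour local binary pattern for a 3x3 patch:
    each neighbour is thresholded against the centre pixel and the
    resulting bits are packed clockwise into a single byte."""
    center = patch[1, 1]
    # clockwise neighbour order, starting at the top-left pixel
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << i
    return code

patch = np.array([[9, 9, 1],
                  [9, 5, 1],
                  [9, 1, 1]])
print(lbp_code(patch))  # → 195 (bits set where neighbours >= centre)
```

A face descriptor is then typically built by histogramming such codes over image cells; DCP extends this idea by sampling two concentric points along eight directions and cross-encoding them, which is what yields the "dual-cross" patterns at roughly twice the LBP cost.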
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to "fast" visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
Social identity modifies face perception: an ERP study of social categorization.
Derks, Belle; Stedehouder, Jeffrey; Ito, Tiffany A
2015-05-01
Two studies examined whether social identity processes, i.e. group identification and social identity threat, amplify the degree to which people attend to social category information in early perception [assessed with event-related brain potentials (ERPs)]. Participants were presented with faces of Muslims and non-Muslims in an evaluative priming task while ERPs were measured and implicit evaluative bias was assessed. Study 1 revealed that non-Muslims showed stronger differentiation between ingroup and outgroup faces in both early (N200) and later processing stages (implicit evaluations) when they identified more strongly with their ethnic group. Moreover, identification effects on implicit bias were mediated by intergroup differentiation in the N200. In Study 2, social identity threat (vs control) was manipulated among Muslims. Results revealed that high social identity threat resulted in stronger differentiation of Muslims from non-Muslims in early (N200) and late (implicit evaluations) processing stages, with N200 effects again predicting implicit bias. Combined, these studies reveal how seemingly bottom-up early social categorization processes are affected by individual and contextual variables that affect the meaning of social identity. Implications of these results for the social identity perspective as well as social cognitive theories of person perception are discussed. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Following the time course of face gender and expression processing: a task-dependent ERP study.
Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa
2014-05-01
The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant, while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later, in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN, or early posterior negativity): amplitudes were less positive for angry faces and during performance of the gender and expression tasks. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex), which showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) under different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Short Term Gains, Long Term Pains: How Cues About State Aid Learning in Dynamic Environments
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
Successful investors seeking returns, animals foraging for food, and pilots controlling aircraft all must take into account how their current decisions will impact their future standing. One challenge facing decision makers is that options that appear attractive in the short-term may not turn out best in the long run. In this paper, we explore human learning in a dynamic decision-making task which places short- and long-term rewards in conflict. Our goal in these studies was to evaluate how people’s mental representation of a task affects their ability to discover an optimal decision strategy. We find that perceptual cues that readily align with the underlying state of the task environment help people overcome the impulsive appeal of short-term rewards. Our experimental manipulations, predictions, and analyses are motivated by current work in reinforcement learning which details how learners value delayed outcomes in sequential tasks and the importance that “state” identification plays in effective learning. PMID:19427635
Naming of objects, faces and buildings in mild cognitive impairment.
Ahmed, Samrah; Arnold, Robert; Thompson, Sian A; Graham, Kim S; Hodges, John R
2008-06-01
Accruing evidence suggests that the cognitive deficits in very early Alzheimer's Disease (AD) are not confined to episodic memory, with a number of studies documenting semantic memory deficits, especially for knowledge of people. To investigate whether this difficulty in naming famous people extends to other proper-name-based information, three naming tasks - the Graded Naming Test (GNT), which uses objects and animals, the Graded Faces Test (GFT) and the newly designed Graded Buildings Test (GBT) - were administered to 69 participants (32 patients in the early prodromal stage of AD, so-called Mild Cognitive Impairment (MCI), and 37 normal control participants). Patients were impaired on all three tests compared to controls, although naming of objects was significantly better than naming of faces and buildings. Discriminant analysis successfully predicted group membership for 100% of controls and 78.1% of patients. The results suggest that even in cases that do not yet fulfil criteria for AD, naming of famous people and buildings is impaired, and that both these semantic domains show greater vulnerability than general semantic knowledge. A semantic deficit, together with the hallmark episodic deficit, may be common in MCI, and graded tasks tapping semantic memory may be useful for the early identification of patients with MCI.
Royer, Jessica; Blais, Caroline; Barnabé-Lortie, Vincent; Carré, Mélissa; Leclerc, Josiane; Fiset, Daniel
2016-06-01
Faces are encountered in highly diverse angles in real-world settings. Despite this considerable diversity, most individuals are able to easily recognize familiar faces. The vast majority of studies in the field of face recognition have nonetheless focused almost exclusively on frontal views of faces. Indeed, a number of authors have investigated the diagnostic facial features for the recognition of frontal views of faces previously encoded in this same view. However, the nature of the information useful for identity matching when the encoded face and test face differ in viewing angle remains mostly unexplored. The present study addresses this issue using individual differences and bubbles, a method that pinpoints the facial features effectively used in a visual categorization task. Our results indicate that the use of features located in the center of the face, the lower left portion of the nose area and the center of the mouth, are significantly associated with individual efficiency to generalize a face's identity across different viewpoints. However, as faces become more familiar, the reliance on this area decreases, while the diagnosticity of the eye region increases. This suggests that a certain distinction can be made between the visual mechanisms subtending viewpoint invariance and face recognition in the case of unfamiliar face identification. Our results further support the idea that the eye area may only come into play when the face stimulus is particularly familiar to the observer. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jiang, Xiong; Bollich, Angela; Cox, Patrick; Hyder, Eric; James, Joette; Gowani, Saqib Ali; Hadjikhani, Nouchine; Blanz, Volker; Manoach, Dara S.; Barton, Jason J.S.; Gaillard, William D.; Riesenhuber, Maximilian
2013-01-01
Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks including social–emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or no activity to faces in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques to characterize neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities being quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical performance to chance level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated via both a novel analysis of local voxel-wise correlations, and the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results for the first time mechanistically link variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions. PMID:24179786
Face-gender discrimination is possible in the near-absence of attention.
Reddy, Leila; Wilken, Patrick; Koch, Christof
2004-03-02
The attentional cost associated with visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentionally demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.
Templier, Lorraine; Chetouani, Mohamed; Plaza, Monique; Belot, Zoé; Bocquet, Patrick; Chaby, Laurence
2015-03-01
Patients with Alzheimer's disease (AD) show cognitive and behavioral disorders, which they and their caregivers have difficulty coping with in daily life. Psychological symptoms seem to be increased by impaired emotion processing in patients, this ability being linked to social cognition and thus essential to maintaining good interpersonal relationships. Non-verbal emotion processing is a genuine way to communicate, especially for patients whose language may be rapidly impaired. Many studies focus on emotion identification in AD patients, mostly by means of facial expressions rather than emotional prosody; even fewer consider emotional prosody production, despite its playing a key role in interpersonal exchanges. The literature on this subject is scarce, with contradictory results. The present study compares the performances of 14 AD patients (88.4±4.9 yrs; MMSE: 19.9±2.7) to those of 14 control subjects (87.5±5.1 yrs; MMSE: 28.1±1.4) in tasks of emotion identification through faces and voices (non-linguistic vocal emotions or emotional prosody) and in a task of emotional prosody production (12 sentences were to be pronounced in a neutral, positive, or negative tone after a context was read). The AD patients showed weaker performances than control subjects in all emotional recognition tasks, particularly when identifying emotional prosody. A negative relation between the identification scores and the NPI (professional caregivers) scores was found, which underlines their link to psychological and behavioral disorders. The production of emotional prosody seems relatively preserved in the mild to moderate stage of the disease: we found subtle differences in acoustic parameters, but qualitative judgments rated the patients' productions as being as good as those of control subjects. These results suggest interesting new directions for improving patients' care.
Abnormal GABAergic function and face processing in schizophrenia: A pharmacologic-fMRI study.
Tso, Ivy F; Fang, Yu; Phan, K Luan; Welsh, Robert C; Taylor, Stephan F
2015-10-01
The involvement of the gamma-aminobutyric acid (GABA) system in schizophrenia is suggested by postmortem studies and the common use of GABA receptor-potentiating agents in treatment. In a recent study, we used a benzodiazepine challenge to demonstrate abnormal GABAergic function during processing of negative visual stimuli in schizophrenia. This study extended that investigation by mapping GABAergic mechanisms associated with face processing and social appraisal in schizophrenia using a benzodiazepine challenge. Fourteen stable, medicated schizophrenia/schizoaffective patients (SZ) and 13 healthy controls (HC) underwent functional MRI using the blood oxygenation level-dependent (BOLD) technique while they performed the Socio-emotional Preference Task (SePT) on emotional face stimuli ("Do you like this face?"). Participants received single-blinded intravenous saline and lorazepam (LRZ) in two separate sessions separated by 1-3 weeks. Both SZ and HC recruited medial prefrontal cortex/anterior cingulate during the SePT, relative to gender identification. A significant drug by group interaction was observed in the medial occipital cortex, such that SZ showed increased BOLD signal to LRZ challenge, while HC showed an expected decrease of signal; the interaction did not vary by task. The altered BOLD response to LRZ challenge in SZ was significantly correlated with increased negative affect across multiple measures. The altered response to LRZ challenge suggests that abnormal face processing and negative affect in SZ are associated with altered GABAergic function in the visual cortex, underscoring the role of impaired visual processing in socio-emotional deficits in schizophrenia. Copyright © 2015 Elsevier B.V. All rights reserved.
Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.
Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T
2013-01-01
Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.
Interference among the Processing of Facial Emotion, Face Race, and Face Gender.
Li, Yongna; Tse, Chi-Shing
2016-01-01
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).
Decoding task-based attentional modulation during face categorization.
Chiu, Yu-Chin; Esterman, Michael; Han, Yuefeng; Rosen, Heather; Yantis, Steven
2011-05-01
Attention is a neurocognitive mechanism that selects task-relevant sensory or mnemonic information to achieve current behavioral goals. Attentional modulation of cortical activity has been observed when attention is directed to specific locations, features, or objects. However, little is known about how high-level categorization task set modulates perceptual representations. In the current study, observers categorized faces by gender (male vs. female) or race (Asian vs. White). Each face was perceptually ambiguous in both dimensions, such that categorization of one dimension demanded selective attention to task-relevant information within the face. We used multivoxel pattern classification to show that task-specific modulations evoke reliably distinct spatial patterns of activity within three face-selective cortical regions (right fusiform face area and bilateral occipital face areas). This result suggests that patterns of activity in these regions reflect not only stimulus-specific (i.e., faces vs. houses) responses but also task-specific (i.e., race vs. gender) attentional modulation. Furthermore, exploratory whole-brain multivoxel pattern classification (using a searchlight procedure) revealed a network of dorsal fronto-parietal regions (left middle frontal gyrus and left inferior and superior parietal lobule) that also exhibit distinct patterns for the two task sets, suggesting that these regions may represent abstract goals during high-level categorization tasks.
Megreya, Ahmed M.; Bindemann, Markus
2017-01-01
It is unresolved whether the permanent auditory deprivation that deaf people experience leads to the enhanced visual processing of faces. The current study explored this question with a matching task in which observers searched for a target face among a concurrent lineup of ten faces. This was compared with a control task in which the same stimuli were presented upside down, to disrupt typical face processing, and an object matching task. A sample of young-adolescent deaf observers performed with higher accuracy than hearing controls across all of these tasks. These results clarify previous findings and provide evidence for a general visual processing advantage in deaf observers rather than a face-specific effect. PMID:28117407
A between-subjects test of the lower-identification/ higher-priming paradox.
Rubino, I Alex; Rociola, Giuseppe; Di Lorenzo, Giorgio; Magni, Valentina; Ribolsi, Michele; Mancini, Valentina; Saya, Anna; Pezzarossa, Bianca; Siracusano, Alberto; Suslow, Thomas
2013-01-01
An under-recognised U-shaped model states that unconscious and conscious perceptual effects are functionally exclusive and that unconscious perceptual effects manifest themselves only at the objective detection threshold, when conscious perception is completely absent. We tested this U-shaped model with a between-subjects paradigm. Angry, happy, or neutral faces, or blank slides, were flashed for 5.5 ms and 19.5 ms before Chinese ideographs in a darkened room. A group of volunteers (n = 84) were asked to rate how much they liked each ideograph and performed an identification task. Based on the median identification score, two subgroups were formed: one with identification scores at or below 50% (n = 31) and one with scores above 50% (n = 53). The hypothesised U-shaped pattern was confirmed by the findings. Affective priming was found only at the two extreme points: the 5.5 ms condition of the low-identification group (subliminal perception) and the 19.5 ms condition of the high-identification group (supraliminal perception). The two intermediate points (19.5 ms for the low-identification group and 5.5 ms for the high-identification group) did not correspond to significant priming effects. These results confirm that a complete absence of conscious perception is the condition for the deployment of unconscious perceptual effects.
Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J
2016-09-01
Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition. © The Author(s) 2016.
Associative (prosop)agnosia without (apparent) perceptual deficits: a case-study.
Anaki, David; Kaufman, Yakir; Freedman, Morris; Moscovitch, Morris
2007-04-09
In associative agnosia, early perceptual processing of faces or objects is considered to be intact, while the ability to access stored semantic information about the individual face or object is impaired. Recent claims, however, have asserted that associative agnosia is also characterized by deficits at the perceptual level, which are too subtle to be detected by current neuropsychological tests. On this view, the impaired identification of famous faces or common objects in associative agnosia stems from difficulties in extracting the minute perceptual details required to identify a face or an object. In the present study, we report the case of patient DBO, who has a left occipital infarct and shows impaired object and famous face recognition. Despite his disability, he exhibits a face inversion effect and is able to select a famous face from among non-famous distractors. In addition, his performance is normal on an immediate and delayed recognition memory test for faces whose external features were deleted. His deficits in face recognition are apparent only when he is required to name a famous face, or to select two faces from among a triad of famous figures based on their semantic relationships (a task which does not require access to names). The nature of his deficits in object perception and recognition is similar to his impairments in the face domain. This pattern of behavior supports the notion that apperceptive and associative agnosia reflect distinct and dissociated deficits, which result from damage to different stages of the face and object recognition process.
Koshino, Hideya; Minamoto, Takehiro; Ikeda, Takashi; Osaka, Mariko; Otsuka, Yuki; Osaka, Naoyuki
2011-01-01
The anterior prefrontal cortex (PFC) exhibits activation during some cognitive tasks, including episodic memory, reasoning, attention, multitasking, task sets, decision making, mentalizing, and processing of self-referenced information. However, the medial part of the anterior PFC is part of the default mode network (DMN), which shows deactivation during various goal-directed cognitive tasks compared to a resting baseline. One possible factor underlying this pattern is that activity in the anterior medial PFC (MPFC) is affected by dynamic allocation of attentional resources depending on task demands. We investigated this possibility using event-related fMRI with a face working memory task. Sixteen students participated in a single fMRI session. They were asked to form a task set to remember the faces (Face memory condition) or to ignore them (No face memory condition), and were then given a 6-second preparation period before the onset of the face stimuli. During this period, four single digits were presented one at a time at the center of the display, and participants were asked to add them and to remember the final answer. When participants formed a task set to remember faces, the anterior MPFC exhibited activation during the task preparation period but deactivation during the task execution period within a single trial. The results suggest that the anterior MPFC plays a role in task set formation but is not involved in execution of the face working memory task. Therefore, when attentional resources are allocated to other brain regions during task execution, the anterior MPFC shows deactivation. Together, these findings suggest that activation and deactivation in the anterior MPFC are affected by dynamic allocation of processing resources across different phases of processing.
The own-age face recognition bias is task dependent.
Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J
2015-08-01
The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel
2013-09-01
In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces.
Accordingly, peripheral processing of these stimuli activated more strongly the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.
Schuch, Stefanie; Werheid, Katja; Koch, Iring
2012-01-01
The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Infants' Response to Maternal Mirroring in the Still Face and Replay Tasks
ERIC Educational Resources Information Center
Bigelow, Ann E.; Walden, Laura M.
2009-01-01
Infants' response to maternal mirroring was investigated in 4-month-old infants. Mother-infant dyads participated in the still face and replay tasks. Infants were grouped by those whose mothers did and did not mirror their behavior in the interactive phases of the tasks. In the still face task, infants with maternal mirroring showed more…
A task-invariant cognitive reserve network.
Stern, Yaakov; Gazes, Yunglin; Razlighi, Qolomreza; Steffener, Jason; Habeck, Christian
2018-05-14
The concept of cognitive reserve (CR) can explain individual differences in susceptibility to cognitive or functional impairment in the presence of age- or disease-related brain changes. Epidemiologic evidence indicates that CR helps maintain performance in the face of pathology across multiple cognitive domains. We therefore tried to identify a single, "task-invariant" CR network that is active during the performance of many disparate tasks. In imaging data acquired from 255 individuals aged 20-80 while performing 12 different cognitive tasks, we used an iterative approach to derive a multivariate network that was expressed during the performance of all tasks, and whose degree of expression correlated with IQ, a proxy for CR. When applied to held-out data or forward applied to fMRI data from an entirely different activation task, network expression correlated with IQ. Expression of the CR pattern accounted for additional variance in fluid reasoning performance over and above the influence of cortical thickness, and also moderated the relationship between cortical thickness and reasoning performance, consistent with the behavior of a CR network. The identification of a task-invariant CR network supports the idea that life experiences may result in brain processing differences that might provide reserve against age- or disease-related changes across multiple tasks. Copyright © 2018. Published by Elsevier Inc.
Task relevance regulates the interaction between reward expectation and emotion.
Wei, Ping; Kang, Guanlan
2014-06-01
In the present study, we investigated the impact of reward expectation on the processing of emotional facial expression using a cue-target paradigm. A cue indicating the reward condition of each trial (incentive vs. non-incentive) was followed by the presentation of a picture of an emotional face, the target. Participants were asked to discriminate the emotional expression of the target face in Experiment 1, to discriminate the gender of the target face in Experiment 2, and to judge a number superimposed on the center of the target face as even or odd in Experiment 3, rendering the emotional expression of the target face as task relevant in Experiment 1 but task irrelevant in Experiments 2 and 3. Faster reaction times (RTs) were observed in the monetary incentive condition than in the non-incentive condition, demonstrating the effect of reward on facilitating task concentration. Moreover, the reward effect (i.e., RTs in non-incentive conditions versus incentive conditions) was larger for emotional faces than for neutral faces when emotional expression was task relevant but not when it was task irrelevant. The findings suggest that top-down incentive motivation biased attentional processing toward task-relevant stimuli, and that task relevance played an important role in regulating the influence of reward expectation on the processing of emotional stimuli.
Temporal event structure and timing in schizophrenia: preserved binding in a longer "now".
Martin, Brice; Giersch, Anne; Huron, Caroline; van Wassenhove, Virginie
2013-01-01
Patients with schizophrenia experience a loss of temporal continuity or subjective fragmentation along the temporal dimension. Here, we develop the hypothesis that impaired temporal awareness results from a perturbed structuring of events in time-i.e., canonical neural dynamics. To address this, 26 patients and their matched controls took part in two psychophysical studies using desynchronized audiovisual speech. Two tasks were used and compared: first, an identification task testing for multisensory binding impairments in which participants reported what they heard while looking at a speaker's face; in a second task, we tested the perceived simultaneity of the same audiovisual speech stimuli. In both tasks, we used McGurk fusion and combination that are classic ecologically valid multisensory illusions. First, and contrary to previous reports, our results show that patients do not significantly differ from controls in their rate of illusory reports. Second, the illusory reports of patients in the identification task were more sensitive to audiovisual speech desynchronies than those of controls. Third, and surprisingly, patients considered audiovisual speech to be synchronized for longer delays than controls. As such, the temporal tolerance profile observed in a temporal judgement task was less of a predictor for sensory binding in schizophrenia than for that obtained in controls. We interpret our results as an impairment of temporal event structuring in schizophrenia which does not specifically affect sensory binding operations but rather, the explicit access to timing information associated here with audiovisual speech processing. Our findings are discussed in the context of current neurophysiological frameworks for the binding and the structuring of sensory events in time. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impaired holistic processing in congenital prosopagnosia
Avidan, Galia; Tanzer, Michal; Behrmann, Marlene
2011-01-01
It has long been argued that face processing requires disproportionate reliance on holistic or configural processing, relative to that required for non-face object recognition, and that a disruption of such holistic processing may be causally implicated in prosopagnosia. Previously, we demonstrated that individuals with congenital prosopagnosia (CP) did not show the normal face inversion effect (better performance for upright compared to inverted faces) and evinced a local (rather than the normal global) bias in a compound letter global/local (GL) task, supporting the claim of disrupted holistic processing in prosopagnosia. Here, we investigate further the nature of holistic processing impairments in CP, first by confirming, in a large sample of CP individuals, the absence of the normal face inversion effect and the presence of the local bias on the GL task, and, second, by employing the composite face paradigm, often regarded as the gold standard for measuring holistic face processing. In this last task, we show that, in contrast with normal individuals, the CP group performed equivalently with aligned and misaligned faces and was impervious to (the normal) interference from the task-irrelevant bottom part of faces. Interestingly, the extent of the local bias evident in the composite task is correlated with the abnormality of performance on diagnostic face processing tasks. Furthermore, there is a significant correlation between the magnitude of the local bias in the GL task and performance on the composite task. These results provide further evidence for impaired holistic processing in CP and, moreover, corroborate the critical role of this type of processing for intact face recognition. PMID:21601583
ERIC Educational Resources Information Center
Burke, Jacqueline A.
2001-01-01
Accounting students (n=128) used either face-to-face or distant Group support systems to complete collaborative tasks. Participation and social presence perceptions were significantly higher face to face. Task difficulty did not affect participation in either environment. (Contains 54 references.) (JOW)
Short, Lindsey A; Mondloch, Catherine J; Hackland, Anne T
2015-01-01
Adults are more accurate in detecting deviations from normality in young adult faces than in older adult faces despite exhibiting comparable accuracy in discriminating both face ages. This deficit in judging the normality of older faces may be due to reliance on a face space optimized for the dimensions of young adult faces, perhaps because of early and continuous experience with young adult faces. Here we examined the emergence of this young adult face bias by testing 3- and 7-year-old children on a child-friendly version of the task used to test adults. In an attractiveness judgment task, children viewed young and older adult face pairs; each pair consisted of an unaltered face and a distorted face of the same identity. Children pointed to the prettiest face, which served as a measure of their sensitivity to the dimensions on which faces vary relative to a norm. To examine whether biases in the attractiveness task were specific to deficits in referencing a norm or extended to impaired discrimination, we tested children on a simultaneous match-to-sample task with the same stimuli. Both age groups were more accurate in judging the attractiveness of young faces relative to older faces; however, unlike adults, the young adult face bias extended to the match-to-sample task. These results suggest that by 3 years of age, children's perceptual system is more finely tuned for young adult faces than for older adult faces, which may support past findings of superior recognition for young adult faces. Copyright © 2014 Elsevier Inc. All rights reserved.
Face identification with frequency domain matched filtering in mobile environments
NASA Astrophysics Data System (ADS)
Lee, Dong-Su; Woo, Yong-Hyun; Yeom, Seokwon; Kim, Shin-Hwan
2012-06-01
Face identification at a distance is very challenging since captured images are often degraded by blur and noise. Furthermore, computational resources and memory are often limited in mobile environments, so developing a real-time face identification system on a mobile device is demanding. This paper discusses face identification based on frequency-domain matched filtering in mobile environments. Face identification is performed by a linear or phase-only matched filter followed by sequential verification stages. Candidate window regions are determined by the major peaks of the linear or phase-only matched filtering output. The sequential stages comprise a skin-color test and an edge-mask filtering test, which verify the color and shape information of the candidate regions in order to remove false alarms. All algorithms are built on the mobile device using the Android platform. Preliminary results show that face identification of East Asian people can be performed successfully in mobile environments.
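The frequency-domain correlation step described above can be sketched as follows, on purely synthetic images. This minimal sketch uses a single phase-only matched filter to locate an exact embedded template; the paper's linear-filter variant and its skin-color and edge-mask verification stages are omitted, and all image sizes and positions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scene" with a 16x16 template embedded at a known location.
scene = rng.random((128, 128))
template = rng.random((16, 16))
scene[40:56, 70:86] = template  # target placed at row 40, column 70

# Zero-pad the template to scene size and correlate in the frequency
# domain. The phase-only filter discards the template's magnitude
# spectrum, which sharpens the correlation peak.
padded = np.zeros_like(scene)
padded[:16, :16] = template
S = np.fft.fft2(scene)
T = np.fft.fft2(padded)
H = np.conj(T) / (np.abs(T) + 1e-12)   # phase-only matched filter
corr = np.real(np.fft.ifft2(S * H))

# The major peak of the correlation plane gives the candidate window.
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # the location where the template was embedded
```

In the full pipeline each such peak would only nominate a candidate window, which the subsequent color and shape tests accept or reject.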
Recognition and identification of famous faces in patients with unilateral temporal lobe epilepsy.
Seidenberg, Michael; Griffith, Randall; Sabsevitz, David; Moran, Maria; Haltiner, Alan; Bell, Brian; Swanson, Sara; Hammeke, Thomas; Hermann, Bruce
2002-01-01
We examined the performance of 21 patients with unilateral temporal lobe epilepsy (TLE) and hippocampal damage (10 left, 11 right) and 10 age-matched controls on the recognition and identification (name and occupation) of well-known faces. Famous face stimuli were selected from four time periods: the 1970s, 1980s, 1990-1994, and 1995-1996. Differential patterns of performance were observed for the left and right TLE groups across distinct face processing components. The left TLE group showed a selective impairment in naming famous faces while performing similarly to the controls in face recognition and semantic identification (i.e. occupation). In contrast, the right TLE group was impaired across all components of face memory: face recognition, semantic identification, and face naming. Face naming impairment in the left TLE group was characterized by a temporal gradient with better naming performance for famous faces from more distant time periods. Findings are discussed in terms of the role of the temporal lobe system for the acquisition, retention, and retrieval of face semantic networks, and the differential effects of lateralized temporal lobe lesions in this process.
Task-irrelevant emotion facilitates face discrimination learning.
Lorenzino, Martina; Caudek, Corrado
2015-03-01
We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Facial expressions of emotion (KDEF): identification under different display-duration conditions.
Calvo, Manuel G; Lundqvist, Daniel
2008-02-01
Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Ohman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.
Peek-a-What? Infants' Response to the Still-Face Task after Normal and Interrupted Peek-a-Boo
ERIC Educational Resources Information Center
Bigelow, Ann E.; Best, Caitlin
2013-01-01
Infants' sensitivity to the vitality or tension envelope within dyadic social exchanges was investigated by examining their responses following normal and interrupted games of peek-a-boo embedded in a Still-Face Task. Infants 5-6 months old engaged in two modified Still-Face Tasks with their mothers. In one task, the initial interaction ended with…
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
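The hierarchical clustering strategy described above can be sketched in miniature, assuming a wholly synthetic experiments-by-regions matrix in place of the BrainMap archive: experiments drawn from the same latent paradigm class share a regional activation profile, and agglomerative clustering recovers the sub-classes. The matrix sizes, noise level, and Ward linkage choice here are illustrative assumptions, not details from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# Hypothetical meta-analytic data: 12 experiments x 50 brain regions,
# where each value is a synthetic activation likelihood. Experiments from
# the same paradigm class share a regional profile plus small noise.
profiles = rng.random((4, 50))  # 4 latent paradigm classes
experiments = np.vstack([p + 0.05 * rng.standard_normal((3, 50))
                         for p in profiles])  # 3 experiments per class

# Agglomerative (Ward) clustering of experiments, cut into 4 sub-classes.
Z = linkage(experiments, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)  # experiments with similar activation patterns share a label
```

Experiments grouped under one label are "functionally similar" in the sense used above: their activation patterns are closer to each other than to those of any other group.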
Evers, Kris; Kerkhof, Inneke; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2014-01-01
Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
Is facial skin tone sufficient to produce a cross-racial identification effect?
Alley, T R; Schultheis, J A
2001-06-01
Research clearly supports the existence of an other race effect for human faces whereby own-race faces are more accurately perceived and recognized. Why this occurs remains unclear. A computerized program (Mac-a-Mug Pro) for face composition was used to create pairs of target and distractor faces that differed only in skin tone. The six target faces were rated on honesty and aggressiveness by 72 university students, with just one 'Black' and one 'White' face viewed by each student. One week later, they attempted to identify these faces in four lineups: two with target-present and two with target-absent. The order of presentation of targets, lineups, and faces within lineups was varied. Own-race identification was slightly better than cross-racial identification. There was no significant difference in the confidence of responses to own- versus other-race faces. These results indicate that neither morphological variation nor differential confidence is necessary for a cross-racial identification effect.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the 6 basic Ekman emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions in unknown faces (p=.01), but they also showed an even more pronounced deficit with familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was with emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.
Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J
2018-06-12
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
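The fusion result above (averaging examiners' rating-based identity judgments outperforms individuals working alone) can be sketched numerically. This is a toy illustration under assumptions not taken from the paper: a rating scale from -3 ("different people") to +3 ("same person") and a zero threshold for calling a pair the same identity.

```python
# Hypothetical sketch: fuse rating-based identity judgments by averaging.
# The -3..+3 scale and zero decision threshold are illustrative assumptions.

def fuse_ratings(ratings_per_examiner):
    """Average each face pair's ratings across examiners."""
    n_pairs = len(ratings_per_examiner[0])
    n_examiners = len(ratings_per_examiner)
    return [sum(ex[i] for ex in ratings_per_examiner) / n_examiners
            for i in range(n_pairs)]


def accuracy(ratings, labels, threshold=0.0):
    """Call a pair 'same person' when its rating exceeds the threshold."""
    calls = [r > threshold for r in ratings]
    return sum(c == l for c, l in zip(calls, labels)) / len(labels)


labels = [True, True, False, False]   # ground truth: same identity?
examiner_a = [2, -1, -2, 1]           # each examiner errs on two pairs...
examiner_b = [-1, 3, 1, -3]           # ...but on different pairs
fused = fuse_ratings([examiner_a, examiner_b])
print(accuracy(examiner_a, labels),
      accuracy(examiner_b, labels),
      accuracy(fused, labels))
```

Because the two examiners err on different pairs, their confident correct ratings dominate the average, so the fused judgments beat either individual; this is the intuition behind the paper's finding that fusion stabilizes and boosts performance.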
Wiese, Holger
2013-01-01
We are typically more accurate at remembering own- than other-race faces. This “own-race bias” has been suggested to result from enhanced expertise with and more efficient perceptual processing of own-race than other-race faces. In line with this idea, the N170, an event-related potential correlate of face perception, has been repeatedly found to be larger for other-race faces. Other studies, however, found no difference in N170 amplitude for faces from diverse ethnic groups. The present study tested whether these seemingly incongruent findings can be explained by varying task demands. European participants were presented with upright and inverted European and Asian faces (as well as European and Asian houses), and asked to either indicate the ethnicity or the orientation of the stimuli. Larger N170s for other-race faces were observed in the ethnicity but not in the orientation task, suggesting that the necessity to process facial category information is a minimum prerequisite for the occurrence of the effect. In addition, N170 inversion effects, with larger amplitudes for inverted relative to upright stimuli, were more pronounced for own- relative to other-race faces in both tasks. Overall, the present findings suggest that the occurrence of ethnicity effects in N170 for upright faces depends on the amount of facial detail required for the task at hand. At the same time, the larger inversion effects for own- than other-race faces occur independent of task and may reflect the fine-tuning of perceptual processing to faces of maximum expertise. PMID:24399955
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition
NASA Astrophysics Data System (ADS)
Yin, Xi; Liu, Xiaoming
2018-02-01
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than the state of the art on LFW, CFP, and IJB-A datasets.
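The loss-combination step in a dynamic-weighting scheme like the one described above can be sketched in a few lines. This is not the paper's exact formulation: it is a minimal sketch assuming the main (identity) loss keeps a fixed weight of 1 while the side-task weights are a softmax over learnable scores, so they stay positive and sum to one.

```python
import math

# Hypothetical sketch of dynamic loss weighting for multi-task training:
# side-task weights are a softmax over learnable scores (an assumption,
# not necessarily the paper's exact scheme); the main task's weight is 1.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def combined_loss(main_loss, side_losses, side_scores):
    """Total loss = main loss + softmax-weighted sum of side-task losses."""
    weights = softmax(side_scores)
    total = main_loss + sum(w * l for w, l in zip(weights, side_losses))
    return total, weights


# e.g. identity loss plus pose / illumination / expression side losses
loss, weights = combined_loss(
    main_loss=1.2,
    side_losses=[0.8, 0.5, 0.3],
    side_scores=[0.0, 0.0, 0.0],   # equal scores -> equal weights of 1/3
)
print(loss, weights)
```

In an actual CNN the `side_scores` would be trainable parameters updated by backpropagation along with the network weights; the softmax keeps the weight assignment automatic rather than hand-tuned, which is the problem the paper's scheme addresses.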
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan
2018-07-01
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity, but not the M170 face-sensitive, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.
Emotion identification in girls at high risk for depression.
Joormann, Jutta; Gilbert, Kirsten; Gotlib, Ian H
2010-05-01
Children of depressed mothers are themselves at elevated risk for developing a depressive disorder. We have little understanding, however, of the specific factors that contribute to this increased risk. This study investigated whether never-disordered daughters whose mothers have experienced recurrent episodes of depression during their daughters' lifetime differ from never-disordered daughters of never-disordered mothers in their processing of facial expressions of emotion. Following a negative mood induction, daughters completed an emotion identification task in which they watched faces slowly change from a neutral to a full-intensity happy, sad, or angry expression. We assessed both the intensity that was required to accurately identify the emotion being expressed and errors in emotion identification. Daughters of depressed mothers required greater intensity than did daughters of control mothers to accurately identify sad facial expressions; they also made significantly more errors identifying angry expressions. Cognitive biases may increase vulnerability for the onset of disorders and should be considered in early intervention and prevention efforts.
About-face on face recognition ability and holistic processing
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
2015-01-01
Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027
Concurrent development of facial identity and expression discrimination.
Dalrymple, Kirsten A; Visconti di Oleggio Castello, Matteo; Elison, Jed T; Gobbini, M Ida
2017-01-01
Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children aged 5-12 years were asked to pick the choice face that is most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression that is most similar to the target expression. There were significant effects of age, with performance improving (becoming more accurate and faster) on both tasks with increasing age. Accuracy and reaction times were not significantly different across tasks and there was no significant Age x Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.
Naranjo, C; Kornreich, C; Campanella, S; Noël, X; Vandriette, Y; Gillain, B; de Longueville, X; Delatte, B; Verbanck, P; Constant, E
2011-02-01
The processing of emotional stimuli is thought to be negatively biased in major depression. This study investigates this issue using musical, vocal and facial affective stimuli. 23 depressed in-patients and 23 matched healthy controls were recruited. Affective information processing was assessed through musical, vocal and facial emotion recognition tasks. Depression, anxiety level and attention capacity were controlled. The depressed participants demonstrated less accurate identification of emotions than the control group in all three sorts of emotion-recognition tasks. The depressed group also gave higher intensity ratings than the controls when scoring negative emotions, and they were more likely to attribute negative emotions to neutral voices and faces. Our in-patient group might differ from the more general population of depressed adults. They were all taking anti-depressant medication, which may have had an influence on their emotional information processing. Major depression is associated with a general negative bias in the processing of emotional stimuli. Emotional processing impairment in depression is not confined to interpersonal stimuli (faces and voices), but extends to the ability to accurately identify emotions in music. © 2010 Elsevier B.V. All rights reserved.
Age-Group Differences in Interference from Young and Older Emotional Faces.
Ebner, Natalie C; Johnson, Marcia K
2010-11-01
Human attention is selective, focusing on some aspects of events at the expense of others. In particular, angry faces engage attention. Most studies have used pictures of young faces, even when comparing young and older age groups. Two experiments asked (1) whether task-irrelevant faces of young and older individuals with happy, angry, and neutral expressions disrupt performance on a face-unrelated task, (2) whether interference varies for faces of different ages and different facial expressions, and (3) whether young and older adults differ in this regard. Participants gave speeded responses on a number task while irrelevant faces appeared in the background. Both age groups were more distracted by own than other-age faces. In addition, young participants' responses were slower for angry than happy faces, whereas older participants' responses were slower for happy than angry faces. Factors underlying age-group differences in interference from emotional faces of different ages are discussed.
The image-interpretation-workstation of the future: lessons learned
NASA Astrophysics Data System (ADS)
Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.
2017-05-01
In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprised of four screens. Each screen is dedicated to a special task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.
Sleep deprivation impairs the accurate recognition of human emotions.
van der Helm, Els; Gujar, Ninad; Walker, Matthew P
2010-03-01
To investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated-measures analysis. Experimental laboratory study. Thirty-seven healthy participants (21 females) aged 18-25 y were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat relevant (Anger) and reward relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues.
Sporer, Siegfried L; Kaminski, Kristina S; Davids, Maike C; McQuiston, Dawn
2016-11-01
When witnesses report a crime, police usually ask for a description of the perpetrator. Several studies suggested that verbalising faces leads to a detriment in identification performance (verbal overshadowing effect [VOE]) but the effect has been difficult to replicate. Here, we sought to reverse the VOE by inducing context reinstatement as a system variable through re-reading one's own description before an identification task. Participants (N = 208) watched a video film and were then dismissed (control group), only described the perpetrator, or described and later re-read their own descriptions before identification in either target-present or target-absent lineups after a 2-day or a 5-week delay. Identification accuracy was significantly higher after re-reading (85.0%) than in the no description control group (62.5%) irrespective of target presence. Data were internally replicated using a second target and corroborated by several small meta-analyses. Identification accuracy was related to description quality. Moreover, there was a tendency towards a verbal facilitation effect (VFE) rather than a VOE. Receiver operating characteristic (ROC) curve analyses confirm that our findings are not due to a shift in response bias but truly reflect improvement of recognition performance. Differences in the ecological validity of study paradigms are discussed.
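The ROC logic invoked above, separating a genuine gain in recognition from a mere shift in response bias, can be sketched with standard signal-detection arithmetic: d' indexes discriminability and the criterion c indexes bias. The numbers below are illustrative: the abstract's 62.5% and 85.0% accuracy figures are reused as hit rates with an assumed 20% false-alarm rate, which is not a value reported in the source.

```python
from statistics import NormalDist

# Hypothetical sketch: signal-detection indices behind an ROC-style check.
# Hits = correct identifications in target-present lineups; false alarms =
# choices in target-absent lineups. The 20% false-alarm rate is assumed.

def dprime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

d_control, c_control = dprime_and_criterion(0.625, 0.20)
d_reread, c_reread = dprime_and_criterion(0.850, 0.20)
print(d_control, d_reread)
```

If re-reading merely made witnesses more willing to choose (a liberal criterion shift), hits and false alarms would rise together and d' would stay flat; a higher hit rate at an unchanged false-alarm rate instead yields a larger d', which is the pattern the authors interpret as a true improvement in recognition.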
Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.
Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R
2010-06-01
People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.
Ensemble perception of emotions in autistic and typical children and adolescents.
Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth
2017-04-01
Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Inter-hemispheric interaction facilitates face processing.
Compton, Rebecca J
2002-01-01
Many recent studies have revealed that interaction between the left and right cerebral hemispheres can aid in task performance, but these studies have tended to examine perception of simple stimuli such as letters, digits or simple shapes, which may have limited naturalistic validity. The present study extends these prior findings to a more naturalistic face perception task. Matching tasks required subjects to indicate when a target face matched one of two probe faces. Matches could be either across-field, requiring inter-hemispheric interaction, or within-field, not requiring inter-hemispheric interaction. Subjects indicated when faces matched in emotional expression (Experiment 1; n=32) or in character identity (Experiment 2; n=32). In both experiments, across-field performance was significantly better than within-field performance, supporting the primary hypothesis. Further, this advantage was greater for the more difficult character identity task. Results offer qualified support for the hypothesis that inter-hemispheric interaction is especially advantageous as task demands increase.
Interfering with memory for faces: The cost of doing two things at once.
Wammes, Jeffrey D; Fernandes, Myra A
2016-01-01
We inferred the processes critical for episodic retrieval of faces by measuring susceptibility to memory interference from different distracting tasks. Experiment 1 examined recognition of studied faces under full attention (FA) or each of two divided attention (DA) conditions requiring concurrent decisions to auditorily presented letters. Memory was disrupted in both DA relative to FA conditions, a result contrary to a material-specific account of interference effects. Experiment 2 investigated whether the magnitude of interference depended on competition between concurrent tasks for common processing resources. Studied faces were presented either upright (configurally processed) or inverted (featurally processed). Recognition was completed under FA, or DA with one of two face-based distracting tasks requiring either featural or configural processing. We found an interaction: memory for upright faces was lower under DA when the distracting task required configural than featural processing, while the reverse was true for memory of inverted faces. Across experiments, the magnitude of memory interference was similar (a 19% or 20% decline from FA) regardless of whether the materials in the distracting task overlapped with the to-be-remembered information. Importantly, interference was significantly larger (42%) when the processing demands of the distracting and target retrieval task overlapped, suggesting a processing-specific account of memory interference.
Domain specificity versus expertise: factors influencing distinct processing of faces.
Carmel, David; Bentin, Shlomo
2002-02-01
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (in Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not species-specific; that is, it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J
2018-05-01
To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate the overall facial attractiveness (task 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos with or without a smile for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by the eye-tracking device during the performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating the overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combined with facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed within facial angles. Laterally rotated faces and presence of a smile highly influence visual attention during the evaluation of facial esthetics.
Selective attention modulates high-frequency activity in the face-processing network.
Müsch, Kathrin; Hamamé, Carlos M; Perrone-Bertolotti, Marcela; Minotti, Lorella; Kahane, Philippe; Engel, Andreas K; Lachaux, Jean-Philippe; Schneider, Till R
2014-11-01
Face processing depends on the orchestrated activity of a large-scale neuronal network. Its activity can be modulated by attention as a function of task demands. However, it remains largely unknown whether voluntary, endogenous attention and reflexive, exogenous attention to facial expressions equally affect all regions of the face-processing network, and whether such effects primarily modify the strength of the neuronal response, the latency, the duration, or the spectral characteristics. We exploited the good temporal and spatial resolution of intracranial electroencephalography (iEEG) and recorded from depth electrodes to uncover the fast dynamics of emotional face processing. We investigated frequency-specific responses and event-related potentials (ERP) in the ventral occipito-temporal cortex (VOTC), ventral temporal cortex (VTC), anterior insula, orbitofrontal cortex (OFC), and amygdala when facial expressions were task-relevant or task-irrelevant. All investigated regions of interest (ROI) were clearly modulated by task demands and exhibited stronger changes in stimulus-induced gamma band activity (50-150 Hz) when facial expressions were task-relevant. Observed latencies demonstrate that the activation is temporally coordinated across the network, rather than serially proceeding along a processing hierarchy. Early and sustained responses to task-relevant faces in VOTC and VTC corroborate their role for the core system of face processing, but they also occurred in the anterior insula. Strong attentional modulation in the OFC and amygdala (300 msec) suggests that the extended system of the face-processing network is only recruited if the task demands active face processing. Contrary to our expectation, we rarely observed differences between fearful and neutral faces. Our results demonstrate that activity in the face-processing network is susceptible to the deployment of selective attention. 
Moreover, we show that endogenous attention operates along the whole face-processing network, and that these effects are reflected in frequency-specific changes in the gamma band.
Effects of Age, Task Performance, and Structural Brain Development on Face Processing
Cohen Kadosh, Kathrin; Johnson, Mark H; Dick, Frederic; Cohen Kadosh, Roi; Blakemore, Sarah-Jayne
2013-01-01
In this combined structural and functional MRI developmental study, we tested 48 participants aged 7–37 years on 3 simple face-processing tasks (identity, expression, and gaze task), which were designed to yield very similar performance levels across the entire age range. The same participants then carried out 3 more difficult out-of-scanner tasks, which provided in-depth measures of changes in performance. For our analysis we adopted a novel, systematic approach that allowed us to differentiate age- from performance-related changes in the BOLD response in the 3 tasks, and compared these effects to concomitant changes in brain structure. The processing of all face aspects activated the core face-network across the age range, as well as additional and partially separable regions. Small task-specific activations in posterior regions were found to increase with age and were distinct from more widespread activations that varied as a function of individual task performance (but not of age). Our results demonstrate that activity during face-processing changes with age, and these effects are still observed when controlling for changes associated with differences in task performance. Moreover, we found that changes in white and gray matter volume were associated with changes in activation with age and performance in the out-of-scanner tasks.
Wright, Kristyn; Kelley, Elizabeth; Poulin-Dubois, Diane
2014-01-01
Research investigating biological motion perception in children with ASD has revealed conflicting findings concerning whether impairments in biological motion perception exist. The current study investigated how children with high-functioning ASD (HF-ASD) performed on two tasks of biological motion identification: a novel schematic motion identification task and a point-light biological motion identification task. Twenty-two HF-ASD children were matched with 21 TD children on gender, non-verbal mental age, and chronological age (M years = 6.72). On both tasks, HF-ASD children performed with similar accuracy as TD children. Across groups, children performed better on animate than on inanimate trials of both tasks. These findings suggest that HF-ASD children's identification of both realistic and schematic biological motion is unimpaired.
ERIC Educational Resources Information Center
Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn
2010-01-01
Face identification and visual search strategies are important in part because they have been claimed to explain the social difficulties observed in people with Asperger syndrome. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…
Processing of task-irrelevant emotional faces impacted by implicit sequence learning.
Peng, Ming; Cai, Mengfei; Zhou, Renlai
2015-12-02
Attentional load may be increased by task-relevant attention, such as the difficulty of the task, or by task-irrelevant attention, such as an unexpected light-spot on the screen. Several studies have focused on the influence of task-relevant attentional load on task-irrelevant emotion processing. In this study, we used event-related potentials to examine the impact of task-irrelevant attentional load on task-irrelevant expression processing. Eighteen participants identified the color of a word (i.e. the color Stroop task) while a picture of a fearful or a neutral face was shown in the background. The task-irrelevant attentional load was increased by regularly presented congruence trials (congruence between the color and the meaning of the word) in the regular condition because implicit sequence learning was induced. We compared task-irrelevant expression processing between the regular condition and the random condition (in which congruence and incongruence trials were presented randomly). Behaviorally, reaction times for fearful faces were faster than those for neutral faces in the random condition, whereas no significant difference was found in the regular condition. The event-related potential results indicated enhanced P2, N2, and P3 amplitudes for fearful relative to neutral faces in the random condition. In comparison, only P2 differed significantly between the two types of expressions in the regular condition. The study showed that attentional load increased by implicit sequence learning influenced the late processing of task-irrelevant expressions.
The other-race effect does not rely on memory: Evidence from a matching task.
Megreya, Ahmed M; White, David; Burton, A Mike
2011-08-01
Viewers are typically better at remembering faces from their own race than from other races; however, it is not yet established whether this effect is due to memorial or perceptual processes. In this study, UK and Egyptian viewers were given a simultaneous face-matching task, in which the target faces were presented upright or upside down. As with previous research using face memory tasks, participants were worse at matching other-race faces than own-race faces and showed a stronger face inversion effect for own-race faces. However, subjects' performance on own and other-race faces was highly correlated. These data provide strong evidence that difficulty in perceptual encoding of unfamiliar faces contributes substantially to the other-race effect and that accounts based entirely on memory cannot capture the full data. Implications for forensic settings are also discussed.
Fishman, Inna; Ng, Rowena; Bellugi, Ursula
2012-01-01
Williams syndrome (WS) is a genetic condition with a distinctive social phenotype characterized by excessive sociability, accompanied by a relative proficiency in face recognition, despite severe deficits in the visuospatial domain of cognition. This consistent phenotypic characteristic and the relative homogeneity of the WS genotype make WS a compelling human model for examining genotype-phenotype relations, especially with respect to social behavior. Following up on a recent report suggesting that individuals with WS do not show race bias and racial stereotyping, this study was designed to investigate the neural correlates of the perception of faces from different races, in individuals with WS as compared to typically developing (TD) controls. Caucasian WS and TD participants performed a gender identification task with own-race (White) and other-race (Black) faces while event-related potentials (ERPs) were recorded. In line with previous studies with TD participants, other-race faces elicited larger-amplitude ERPs within the first 200 ms following face onset, in WS and TD participants alike. These results suggest that, just like their TD counterparts, individuals with WS differentially processed faces of own- vs. other-race at relatively early stages of processing, starting as early as 115 ms after face onset. Overall, these results indicate that neural processing of faces in individuals with WS is moderated by race at early perceptual stages, calling for a reconsideration of the previous claim that they are uniquely insensitive to race.
Reward Activates Stimulus-Specific and Task-Dependent Representations in Visual Association Cortices
Muller, Timothy; Yeung, Nick; Waszak, Florian
2014-01-01
Humans reliably learn which actions lead to rewards. One prominent question is how credit is assigned to environmental stimuli that are acted upon. Recent functional magnetic resonance imaging (fMRI) studies have provided evidence that representations of rewarded stimuli are activated upon reward delivery, providing possible eligibility traces for credit assignment. Our study sought evidence of postreward activation in sensory cortices satisfying two conditions of instrumental learning: postreward activity should reflect the stimulus category that preceded reward (stimulus specificity), and should occur only if the stimulus was acted on to obtain reward (task dependency). Our experiment implemented two tasks in the fMRI scanner. The first was a perceptual decision-making task on degraded face and house stimuli. Stimulus specificity was evident as rewards activated the sensory cortices associated with face versus house perception more strongly after face versus house decisions, respectively, particularly in the fusiform face area. Stimulus specificity was further evident in a psychophysiological interaction analysis wherein face-sensitive areas correlated with nucleus accumbens activity after face-decision rewards, whereas house-sensitive areas correlated with nucleus accumbens activity after house-decision rewards. The second task required participants to make an instructed response. The criterion of task dependency was fulfilled as rewards after face versus house responses activated the respective association cortices to a larger degree when faces and houses were relevant to the performed task. Our study is the first to show that postreward sensory cortex activity meets these two key criteria of credit assignment, and does so independently from bottom-up perceptual processing.
ERIC Educational Resources Information Center
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
2010-01-01
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Is a neutral expression also a neutral stimulus? A study with functional magnetic resonance.
Carvajal, Fernando; Rubio, Sandra; Serrano, Juan M; Ríos-Lago, Marcos; Alvarez-Linera, Juan; Pacheco, Lara; Martín, Pilar
2013-08-01
Although neutral faces do not initially convey an explicit emotional message, it has been found that individuals tend to assign them an affective content. Moreover, previous research has shown that affective judgments are mediated by the task they have to perform. Using functional magnetic resonance imaging in 21 healthy participants, we focus this study on the cerebral activity patterns triggered by neutral and emotional faces in two different tasks (social or gender judgments). Results obtained using conjunction analyses indicated that viewing both emotional and neutral faces evokes activity in several similar brain areas, indicating a common neural substrate. Moreover, neutral faces specifically elicit activation of cerebellum, frontal and temporal areas, while emotional faces involve the cuneus, anterior cingulate gyrus, medial orbitofrontal cortex, posterior superior temporal gyrus, precentral/postcentral gyrus and insula. The task selected was also found to influence brain activity, in that the social task recruited frontal areas while the gender task involved the posterior cingulate, inferior parietal lobule and middle temporal gyrus to a greater extent. Specifically, in the social task viewing neutral faces was associated with longer reaction times and increased activity of left dorsolateral frontal cortex compared with viewing facial expressions of emotions. In contrast, in the same task emotional expressions distinctively activated the left amygdala. The results are discussed taking into consideration the fact that, like other facial expressions, neutral expressions are usually assigned some emotional significance. However, neutral faces evoke a greater activation of circuits probably involved in more elaborate cognitive processing.
Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.
Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley
2011-07-01
Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.
ERIC Educational Resources Information Center
Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca
2011-01-01
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…
Cognitive demands of face monitoring: evidence for visuospatial overload.
Doherty-Sneddon, G; Bonner, L; Bruce, V
2001-10-01
Young children perform difficult communication tasks better face to face than when they cannot see one another (e.g., Doherty-Sneddon & Kent, 1996). However, in recent studies, it was found that children aged 6 and 10 years, describing abstract shapes, showed evidence of face-to-face interference rather than facilitation. For some communication tasks, access to visual signals (such as facial expression and eye gaze) may hinder rather than help children's communication. In new research we have pursued this interference effect. Five studies are described with adults and 10- and 6-year-old participants. It was found that looking at a face interfered with children's abilities to listen to descriptions of abstract shapes. Children also performed visuospatial memory tasks worse when they looked at someone's face prior to responding than when they looked at a visuospatial pattern or at the floor. It was concluded that performance on certain tasks was hindered by monitoring another person's face. It is suggested that processing of visual communication signals shares certain processing resources with the processing of other visuospatial information.
ERIC Educational Resources Information Center
Casini, Laurence; Burle, Boris; Nguyen, Noel
2009-01-01
Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single task control…
Development and early application of the Scottish Community Nursing Workload Measurement Tool.
Grafen, May; Mackenzie, Fiona C
2015-02-01
This article describes the development and early application of the Scottish Community Nursing Workload Measurement Tool, part of a suite of tools aiming to ensure a consistent approach to measuring nursing workload across NHS Scotland. The tool, which enables community nurses to record and report their actual workload by collecting information on six categories of activity, is now being used by all NHS boards as part of a triangulated approach. Data being generated by the tool at national level include indications that approximately 50% of band 6 district nurses' time is spent in face-to-face and non-face-to-face contact and planned sessions with patients, and that over 60% of face-to-face contacts are at 'moderate' and 'complex' levels of intervention (2012 data). These data are providing hard evidence of key elements of community nursing activity and practice that will enable informed decisions about workforce planning to be taken forward locally and nationally. The article features an account of the early impact of the tool's implementation in an NHS board by an associate director of nursing. Positive effects from implementation include the generation of reliable data to inform planning decisions, identification of issues around nursing time spent on administrative tasks, clarification of school nursing roles, and information being fed back to teams on various aspects of performance.
The automaticity of face perception is influenced by familiarity.
Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J
2017-10-01
In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.
Holistic face perception is modulated by experience-dependent perceptual grouping.
Curby, Kim M; Entenman, Robert J; Fleming, Justin T
2016-07-01
What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.
Effect of distracting faces on visual selective attention in the monkey.
Landman, Rogier; Sharma, Jitendra; Sur, Mriganka; Desimone, Robert
2014-12-16
In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions, interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for greater than 200 ms. Emotional faces also affected saccade velocity and reduced the pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical-subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits.
Learning discriminative features from RGB-D images for gender and ethnicity identification
NASA Astrophysics Data System (ADS)
Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss
2016-11-01
The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.
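The pipeline this abstract compares — face descriptors extracted from the RGB and depth channels, a feature selection step, and a learned classifier — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' method: the intensity-histogram descriptor, the mean-difference feature filter, and the nearest-centroid classifier are all simplifying assumptions standing in for the LBP/HOG-style descriptors and learners such systems typically use.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(rgb_patch, depth_patch, n_bins=16):
    """Toy RGB-D descriptor: intensity histograms from the grayscale and
    depth channels, concatenated (early fusion)."""
    gray = rgb_patch.mean(axis=2)
    h_rgb, _ = np.histogram(gray, bins=n_bins, range=(0, 1), density=True)
    h_depth, _ = np.histogram(depth_patch, bins=n_bins, range=(0, 1), density=True)
    return np.concatenate([h_rgb, h_depth])

def make_sample(label):
    """Synthetic 'face': random RGB patch plus a depth map whose
    distribution depends weakly on the class label."""
    rgb = rng.random((32, 32, 3))
    depth = rng.beta(2 + label, 2, size=(32, 32))
    return extract_features(rgb, depth), label

samples = [make_sample(y) for y in rng.integers(0, 2, 200)]
X = np.stack([f for f, _ in samples])
y = np.array([lab for _, lab in samples])
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Feature selection: keep the dimensions with the largest between-class
# mean difference on the training split.
diff = np.abs(X_train[y_train == 0].mean(0) - X_train[y_train == 1].mean(0))
keep = np.argsort(diff)[-8:]

# Nearest-centroid classifier on the selected features.
centroids = np.stack([X_train[y_train == c][:, keep].mean(0) for c in (0, 1)])
dists = ((X_test[:, keep][:, None, :] - centroids[None]) ** 2).sum(-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y_test).mean()
```

The same skeleton accommodates the comparisons the paper runs: swap the descriptor (RGB only, depth only, or concatenated), the selection method, or the classifier, and compare the resulting accuracies to measure how much the depth channel contributes.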
Virtually compliant: Immersive video gaming increases conformity to false computer judgments.
Weger, Ulrich W; Loughnan, Stephen; Sharma, Dinkar; Gonidis, Lazaros
2015-08-01
Real-life face-to-face encounters are on the decline in a world in which many routine tasks are delegated to virtual characters, a development that bears both opportunities and risks. Interacting with such virtual-reality beings is particularly common during role-playing videogames, in which we inhabit the virtual reality of an avatar. Video gaming is known to lead to the training and development of real-life skills and behaviors; hence, in the present study we sought to explore whether role-playing video gaming primes individuals' identification with a computer enough to increase computer-related social conformity. Following immersive video gaming, individuals were indeed more likely to give up their own best judgment and to follow the vote of computers, especially when the stimulus context was ambiguous. Implications for human-computer interactions and for our understanding of the formation of identity and self-concept are discussed.
Neural bases of different cognitive strategies for facial affect processing in schizophrenia.
Fakra, Eric; Salgado-Pineda, Pilar; Delaveau, Pauline; Hariri, Ahmad R; Blin, Olivier
2008-03-01
To examine the neural basis and dynamics of facial affect processing in schizophrenic patients as compared to healthy controls. Fourteen schizophrenic patients and fourteen matched controls performed a facial affect identification task during fMRI acquisition. The emotional task included an intuitive emotional condition (matching emotional faces) and a more cognitively demanding condition (labeling emotional faces). Individual analysis for each emotional condition, and second-level t-tests examining both within- and between-group differences, were carried out using a random effects approach. Psychophysiological interactions (PPI) were tested for variations in functional connectivity between the amygdala and other brain regions as a function of changes in experimental conditions (labeling versus matching). During the labeling condition, both groups engaged similar networks. During the matching condition, schizophrenic patients failed to activate regions of the limbic system implicated in the automatic processing of emotions. PPI revealed inverse functional connectivity between prefrontal regions and the left amygdala in healthy volunteers, but there was no such change in patients. Furthermore, during the matching condition, and compared to controls, patients showed decreased activation of regions involved in holistic face processing (fusiform gyrus) and increased activation of regions associated with feature analysis (inferior parietal cortex, left middle temporal lobe, right precuneus). Our findings suggest that schizophrenic patients invariably adopt a cognitive approach when identifying facial affect. The distributed neocortical network observed during the intuitive condition indicates that patients may resort to feature-based, rather than configuration-based, processing, which may constitute a compensatory strategy for limbic dysfunction.
Mueller, Sven C; Cromheeke, Sofie; Siugzdaite, Roma; Nicolas Boehler, C
2017-08-01
In adults, cognitive control is supported by several brain regions including the limbic system and the dorsolateral prefrontal cortex (dlPFC) when processing emotional information. However, in adolescents, some theories hypothesize a neurobiological imbalance, proposing heightened sensitivity to affective material in the amygdala and striatum within a cognitive control context. Yet, direct neurobiological evidence is scarce. Twenty-four adolescents (aged 12-16) and 28 adults (aged 25-35) completed an emotional n-back working memory task in response to happy, angry, and neutral faces during fMRI. Importantly, participants either paid attention to the emotion (task-relevant condition) or judged the gender (task-irrelevant condition). Behaviorally, for both groups, when happy faces were task-relevant, performance improved relative to when they were task-irrelevant, while performance decrements were seen for angry faces. In the dlPFC, angry faces elicited more activation in adults during low relative to high cognitive load (2-back vs. 0-back). By contrast, happy faces elicited more activation in the amygdala in adolescents when they were task-relevant. Happy faces also generally increased nucleus accumbens activity (regardless of relevance) in adolescents relative to adults. Together, the findings are consistent with neurobiological models of adolescent brain development and identify neurodevelopmental differences in cognitive control-emotion interactions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Withholding response to self-face is faster than to other-face.
Zhu, Min; Hu, Yinying; Tang, Xiaochen; Luo, Junlong; Gao, Xiangping
2015-01-01
The self-face advantage refers to the finding that adults respond faster to their own face than to another person's face. A stop-signal task was used to explore how the self-face advantage interacts with response inhibition. The results showed that reaction times to the self-face were faster than those to the other-face in the stop trials but not in the go task. The novel finding was that the self-face yielded shorter stop-signal reaction times than the other-face in successfully inhibited trials. These results indicate that self-face processing may be characterized by both a strong response tendency and a correspondingly strong inhibitory control.
Searching for differences in race: is there evidence for preferential detection of other-race faces?
Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka
2009-06-01
Previous research has suggested that, like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are detected more slowly among Caucasian face backgrounds when larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.
Task-irrelevant own-race faces capture attention: eye-tracking evidence.
Cao, Rong; Wang, Shuzhen; Rao, Congquan; Fu, Jia
2013-04-01
To investigate attentional capture by a face's race, the current study recorded saccade latencies with eye tracking in an inhibition of return (IOR) task. Compared to Caucasian (other-race) faces, Chinese (own-race) faces elicited longer saccade latencies. This effect disappeared when the faces were inverted. The results indicate that own-race faces capture attention automatically via high-level configural processing. © 2013 The Authors. Scandinavian Journal of Psychology © 2013 The Scandinavian Psychological Associations.
Social cognition and interaction training (SCIT) for outpatients with bipolar disorder.
Lahera, G; Benito, A; Montes, J M; Fernández-Liria, A; Olbert, C M; Penn, D L
2013-03-20
Patients with bipolar disorder show social cognition deficits during both symptomatic and euthymic phases of the illness, partially independent of other cognitive dysfunctions and current mood. Previous studies in schizophrenia have revealed that social cognition is a modifiable domain. Social cognition and interaction training (SCIT) is an 18-week, manual-based, group treatment designed to improve social functioning by way of social cognition. Thirty-seven outpatients with DSM-IV-TR bipolar and schizoaffective disorders were randomly assigned to treatment as usual (TAU)+SCIT (n=21) or TAU (n=16). Independent, blind evaluators assessed subjects before and after the intervention on the Face Emotion Identification Task (FEIT), Face Emotion Discrimination Task (FEDT), Emotion Recognition (ER40), Theory of Mind (Hinting Task), and Hostility Bias (AIHQ). Analysis of covariance revealed significant group effects for emotion perception, theory of mind, and depressive symptoms. The SCIT group showed a small within-group decrease on the AIHQ Blame subscale, a moderate decrease in AIHQ Hostility Bias, a small increase in scores on the Hinting Task, a moderate increase on the ER40, and large increases on the FEDT and FEIT. There was no evidence of effects on aggressive attributional biases or on global functioning. No follow-up assessment was conducted, so it is unknown whether the effects of SCIT persist over time. This trial provides preliminary evidence that SCIT is feasible and may improve social cognition for bipolar and schizoaffective outpatients. Copyright © 2012 Elsevier B.V. All rights reserved.
The "parts and wholes" of face recognition: A review of the literature.
Tanaka, James W; Simonyi, Diana
2016-10-01
It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested in isolation and in the context of the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing.
A Combined sEMG and Accelerometer System for Monitoring Functional Activity in Stroke.
Roy, S; Cheng, M; Chang, S; Moore, J; De Luca, G; Nawab, S; De Luca, C
2014-04-23
Remote monitoring of physical activity using body-worn sensors provides an alternative to assessment of functional independence by subjective, paper-based questionnaires. This study investigated the classification accuracy of a combined surface electromyographic (sEMG) and accelerometer (ACC) sensor system for monitoring activities of daily living in patients with stroke. sEMG and ACC data were recorded from 10 hemiparetic patients while they carried out a sequence of 11 activities of daily living (identification tasks) and 10 activities used to evaluate misclassification errors (non-identification tasks). The sEMG and ACC sensor data were analyzed using a multilayered neural network and an adaptive neuro-fuzzy inference system to identify the minimal sensor configuration needed to accurately classify the identification tasks, with a minimal number of misclassifications of the non-identification tasks. The results demonstrated that the highest sensitivity and specificity for the identification tasks was achieved using a subset of 4 ACC sensors and adjacent sEMG sensors located on both upper arms, one forearm, and one thigh, respectively. This configuration resulted in a mean sensitivity of 95.0% and a mean specificity of 99.7% for the identification tasks, and a mean misclassification error of <10% for the non-identification tasks. The findings support the feasibility of a hybrid sEMG and ACC wearable sensor system for automatic recognition of motor tasks used to assess functional independence in patients with stroke.
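The sensitivity and specificity figures reported in this abstract follow from the standard confusion-matrix definitions. A minimal sketch, using illustrative counts that are not taken from the study:

```python
# Hypothetical sketch of the sensitivity/specificity computation
# behind the reported classifier accuracy (counts are illustrative,
# not from the study).

def sensitivity(tp, fn):
    """Fraction of actual identification tasks correctly recognized."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-identification tasks correctly rejected."""
    return tn / (tn + fp)

# Illustrative counts:
tp, fn = 95, 5    # identification tasks: recognized vs. missed
tn, fp = 997, 3   # non-identification tasks: rejected vs. misclassified

print(round(sensitivity(tp, fn), 3))   # 0.95
print(round(specificity(tn, fp), 3))   # 0.997
```

With these made-up counts the sketch reproduces values of the same order as the 95.0% sensitivity and 99.7% specificity quoted above.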
Children (but not adults) judge similarity in own- and other-race faces by the color of their skin
Balas, Benjamin; Peissig, Jessie; Moulson, Margaret
2014-01-01
Face shape and pigmentation are both diagnostic cues for face identification and categorization. In particular, both shape and pigmentation contribute to observers’ categorization of faces by race. Though many theoretical accounts of the behavioral other-race effect either explicitly or implicitly depend on differential use of visual information as a function of category expertise, there is little evidence that observers do in fact differentially rely upon distinct visual cues for own- and other-race faces. Presently, we examined how Asian and Caucasian children (4–6 years old) and adults use 3D shape and 2D pigmentation to make similarity judgments of White, Black, and Asian faces. Children in this age range are both capable of making category judgments about race, but are also sufficiently plastic with regard to the behavioral other-race effect that it seems as though their representations of facial appearance across different categories are still emerging. Using a simple match-to-sample similarity task, we found that children tend to use pigmentation to judge facial similarity more than adults, and also that own- vs. other-group category membership appears to influence how quickly children learn to use shape information more readily. We therefore suggest that children continue to adjust how different visual information is weighted during early and middle childhood and that experience with faces affects the speed at which adult-like weightings are established. PMID:25462031
Super-Memorizers Are Not Super-Recognizers
Ramon, Meike; Miellet, Sebastien; Dzieciol, Anna M.; Konrad, Boris Nikolai; Dresler, Martin; Caldara, Roberto
2016-01-01
Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory. PMID:27008627
Error-related negativity varies with the activation of gender stereotypes.
Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin
2008-09-19
The error-related negativity (ERN) has been suggested to reflect a response-performance monitoring process. The purpose of this study was to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus was a picture of a male or female face and the target stimulus was either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials was significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN elicited in this task has two sources: operation errors and the conflict between gender stereotype activation and non-prejudice beliefs. Gender stereotype activation may be the key factor leading to this difference in ERN; in other words, stereotype activation in this experimental paradigm may be indexed by the ERN.
Holistic face processing can inhibit recognition of forensic facial composites.
McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H
2016-04-01
Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate, and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites, but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because, unlike real faces, composites contain inaccuracies, and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of personally familiar people. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.
Repetition priming of access to biographical information from faces.
Johnston, Robert A; Barry, Christopher
2006-02-01
Two experiments examined repetition priming on tasks that require access to semantic (or biographical) information from faces. In the second stage of each experiment, participants made either a nationality or an occupation decision to faces of celebrities, and, in the first stage, they made either the same or a different decision to faces (in Experiment 1) or the same or a different decision to printed names (in Experiment 2). All combinations of priming and test tasks produced clear repetition effects, which occurred irrespective of whether the decisions made were positive or negative. Same-domain (face-to-face) repetition priming was larger than cross-domain (name-to-face) priming, and priming was larger when the two tasks were the same. We discuss how these findings are more readily accommodated by the Burton, Bruce, and Johnston (1990) model of face recognition than by episode-based accounts of repetition priming.
Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M
2002-09-01
(1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. 30 subjects with AMD (age range 66-90 years; visual acuity 0.4-1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = -0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = -0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. 
A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance.
Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping
2012-01-01
The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
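The "superior discriminability" reported in this abstract is conventionally quantified with the signal-detection index d', the distance between z-transformed hit and false-alarm rates on the old/new recognition test. A minimal sketch, with illustrative rates that are not taken from the study:

```python
# Hypothetical sketch of the d' (discriminability) computation used in
# old/new recognition studies. Rates are illustrative, not from the study.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative pattern: better discrimination of own-race faces.
own_race = d_prime(0.80, 0.20)     # higher hits, fewer false alarms
other_race = d_prime(0.70, 0.35)

print(own_race > other_race)   # True
```

A higher d' for own-race faces is exactly the Cross-Race Effect pattern the abstract describes; extreme rates of 0 or 1 are typically corrected before the z-transform in practice.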
Mittermeier, Verena; Leicht, Gregor; Karch, Susanne; Hegerl, Ulrich; Möller, Hans-Jürgen; Pogarell, Oliver; Mulert, Christoph
2011-03-01
Several studies suggest that attention to emotional content is related to specific changes in central information processing. In particular, event-related potential (ERP) studies focusing on emotion recognition in pictures and faces or word processing have pointed toward a distinct component of the visual-evoked potential, the EPN ('early posterior negativity'), which has been shown to be related to attention to emotional content. In the present study, we were interested in the existence of a corresponding ERP component in the auditory modality and a possible relationship with the personality dimension extraversion-introversion, as assessed by the NEO Five-Factor Inventory. We investigated 29 healthy subjects using three types of auditory choice tasks: (1) the distinction of syllables with emotional intonation, (2) the identification of the emotional content of adjectives, and (3) a purely cognitive control task. Compared with the cognitive control task, emotional paradigms using auditory stimuli evoked an EPN component with a distinct peak after 170 ms (EPN 170). Interestingly, subjects with high scores on the personality trait extraversion showed significantly higher EPN amplitudes for emotional paradigms (syllables and words) than introverted subjects.
Task-irrelevant fear enhances amygdala-FFG inhibition and decreases subsequent face processing.
Schulte Holthausen, Barbara; Habel, Ute; Kellermann, Thilo; Schelenz, Patrick D; Schneider, Frank; Christopher Edgar, J; Turetsky, Bruce I; Regenbogen, Christina
2016-09-01
Facial threat is associated with changes in limbic activity as well as modifications of the cortical face-related N170. It remains unclear whether task-irrelevant threat modulates the response to a subsequent facial stimulus, and whether the amygdala's role in early threat perception is independent and direct, or modulatory. In 19 participants, crowds of emotional faces were followed by target faces and a rating task while EEG and fMRI were recorded simultaneously. In addition to conventional analyses, fMRI-informed EEG analyses and fMRI dynamic causal modeling (DCM) were performed. Fearful crowds reduced EEG N170 target face amplitudes and increased responses in an fMRI network comprising insula, amygdala, and inferior frontal cortex. Multimodal analyses showed that the amygdala response was present ∼60 ms before the right fusiform gyrus-derived N170. DCM indicated inhibitory connections from amygdala to fusiform gyrus, strengthened when fearful crowds preceded a target face. Results demonstrated the suppressing influence of task-irrelevant fearful crowds on subsequent face processing. The amygdala may be sensitive to task-irrelevant fearful crowds and subsequently strengthen its inhibitory influence on face-responsive fusiform N170 generators. This provides spatiotemporal evidence for a feedback mechanism of the amygdala that narrows attention in order to focus on potential threats. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Sugden, Nicole A; Marquis, Alexandra R
2017-11-01
Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life; how this differs with age; and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples; 1,926 participants completed 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar faces). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
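The "percent greater than chance" effect sizes in this meta-analysis derive from looking-time preference scores: looking to the novel face as a percentage of total looking, minus the 50% chance level. A minimal sketch of that arithmetic, with made-up looking times that are not from the review:

```python
# Hypothetical sketch of a looking-time novelty-preference score
# (times in milliseconds are illustrative, not from the meta-analysis).

def novelty_preference(novel_ms, familiar_ms):
    """Looking to the novel face as % of total looking, minus 50% chance."""
    total = novel_ms + familiar_ms
    return 100.0 * novel_ms / total - 50.0

# An infant who looks 5,650 ms at the novel face and 4,350 ms at the
# familiar face scores 6.5% above chance, close to the overall 6.53%
# effect reported above.
print(round(novelty_preference(5650, 4350), 2))  # 6.5
```

A score of 0 means equal looking at novel and familiar faces, i.e., no evidence of discrimination.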
Affective learning modulates spatial competition during low-load attentional conditions.
Lim, Seung-Lark; Padmala, Srikanth; Pessoa, Luiz
2008-04-01
It has been hypothesized that the amygdala mediates the processing advantage of emotional items. In the present study, we employed functional magnetic resonance imaging (fMRI) to investigate how fear conditioning affected the visual processing of task-irrelevant faces. We hypothesized that faces previously paired with shock (threat faces) would more effectively vie for processing resources during conditions involving spatial competition. To investigate this question, following conditioning, participants performed a letter-detection task on an array of letters that was superimposed on task-irrelevant faces. Attentional resources were manipulated by having participants perform an easy or a difficult search task. Our findings revealed that threat fearful faces evoked stronger responses in the amygdala and fusiform gyrus relative to safe fearful faces during low-load attentional conditions, but not during high-load conditions. Consistent with the increased processing of shock-paired stimuli during the low-load condition, such stimuli exhibited increased behavioral priming and fMRI repetition effects relative to unpaired faces during a subsequent implicit-memory task. Overall, our results suggest a competition model in which affective significance signals from the amygdala may constitute a key modulatory factor determining the neural fate of visual stimuli. In addition, it appears that such competitive advantage is only evident when sufficient processing resources are available to process the affective stimulus.
Schubert, Jonathan T. W.; Badde, Stephanie; Röder, Brigitte
2017-01-01
Task demands modulate tactile localization in sighted humans, presumably through weight adjustments in the spatial integration of anatomical, skin-based, and external, posture-based information. In contrast, previous studies have suggested that congenitally blind humans, by default, refrain from automatic spatial integration and localize touch using only skin-based information. Here, sighted and congenitally blind participants localized tactile targets on the palm or back of one hand, while ignoring simultaneous tactile distractors at congruent or incongruent locations on the other hand. We probed the interplay of anatomical and external location codes for spatial congruency effects by varying hand posture: the palms either both faced down, or one faced down and one up. In the latter posture, externally congruent target and distractor locations were anatomically incongruent and vice versa. Target locations had to be reported either anatomically (“palm” or “back” of the hand), or externally (“up” or “down” in space). Under anatomical instructions, performance was more accurate for anatomically congruent than incongruent target-distractor pairs. In contrast, under external instructions, performance was more accurate for externally congruent than incongruent pairs. These modulations were evident in sighted and blind individuals. Notably, distractor effects were overall far smaller in blind than in sighted participants, despite comparable target-distractor identification performance. Thus, the absence of developmental vision seems to be associated with an increased ability to focus tactile attention towards a non-spatially defined target. Nevertheless, that blind individuals exhibited effects of hand posture and task instructions in their congruency effects suggests that, like the sighted, they automatically integrate anatomical and external information during tactile localization. 
Spatial integration in tactile processing is thus flexibly adapted by top-down information (here, task instruction) even in the absence of developmental vision. PMID:29228023
The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information
ERIC Educational Resources Information Center
Liu, Chang Hong; Ward, James; Markall, Helena
2007-01-01
Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…
Yu, Fengqiong; Zhou, Xiaoqing; Qing, Wu; Li, Dan; Li, Jing; Chen, Xingui; Ji, Gongjun; Dong, Yi; Luo, Yuejia; Zhu, Chunyan; Wang, Kai
2017-01-30
The present study aimed to investigate the neural substrates of response inhibition to sad faces across explicit and implicit tasks in depressed female patients. Event-related potentials were obtained while participants performed modified explicit and implicit emotional go/no-go tasks. Compared to controls, depressed patients showed decreased discrimination accuracy and decreased amplitudes of the original and nogo-minus-go difference waves in the P3 interval during response inhibition to sad faces in both explicit and implicit tasks. P3 difference-wave amplitudes were positively correlated with discrimination accuracy and were independent of clinical assessment. Activation of the right dorsal prefrontal cortex was larger for the implicit than for the explicit task in the sad condition in healthy controls, but was similar for the two tasks in depressed patients. The present study indicated that the selective impairment of response inhibition to sad faces in depressed female patients occurred at the behavioral inhibition stage across implicit and explicit tasks and may be a trait-like marker of depression. Longitudinal studies are required to determine whether decreased response inhibition to sad faces increases the risk for future depressive episodes so that appropriate treatment can be administered to patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
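The nogo-minus-go difference wave and its mean amplitude in the P3 window, central to analyses like the one above, can be sketched in a few lines. This is an illustrative toy example only; the sampling grid, window bounds, and amplitudes are assumptions, not the study's actual parameters:

```python
def difference_wave(go_erp, nogo_erp):
    """Point-by-point nogo-minus-go difference wave (microvolts)."""
    return [nogo - go for go, nogo in zip(go_erp, nogo_erp)]

def mean_amplitude(wave, times_ms, t_start, t_end):
    """Mean amplitude of a waveform within a time window (e.g., the P3 interval)."""
    window = [v for v, t in zip(wave, times_ms) if t_start <= t <= t_end]
    return sum(window) / len(window)

# Toy single-channel condition averages, sampled every 100 ms from stimulus onset
times = [0, 100, 200, 300, 400, 500]
go_avg = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
nogo_avg = [0.0, 1.0, 2.0, 6.0, 5.0, 2.0]

diff = difference_wave(go_avg, nogo_avg)        # larger positivity for nogo trials
p3_amp = mean_amplitude(diff, times, 300, 500)  # mean over an assumed P3 window
```

Group comparisons (patients vs. controls) would then be run on `p3_amp` values computed per participant and condition.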
Wang, Hailing; Ip, Chengteng; Fu, Shimin; Sun, Pei
2017-05-01
Face recognition theories suggest that our brains process invariant (e.g., gender) and changeable (e.g., emotion) facial dimensions separately. To investigate whether these two dimensions are processed on different time courses, we analyzed the selection negativity (SN, an event-related potential component reflecting attentional modulation) elicited by face gender and emotion during a feature-selective attention task. Participants were instructed to attend to a combination of face emotion and gender attributes in Experiment 1 (bi-dimensional task) and to either face emotion or gender in Experiment 2 (uni-dimensional task). The results revealed that face emotion did not elicit a substantial SN, whereas face gender consistently generated a substantial SN in both experiments. These results suggest that face gender is more sensitive to feature-selective attention and that face emotion is encoded relatively automatically, as indexed by the SN, implying the existence of different underlying processing mechanisms for invariant and changeable facial dimensions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Thompson, Laura A; Malloy, Daniel M; Cone, John M; Hendrickson, David L
2010-01-01
We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods.
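The signal detection analysis used above to compare face regions typically reduces to computing a sensitivity index (d′) and a response criterion from detection counts. A minimal sketch, with invented counts; the log-linear correction used here is one common convention, not necessarily the authors' choice:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from raw detection counts.

    Applies the log-linear correction (add 0.5 per cell) so that hit or
    false-alarm rates of exactly 0 or 1 do not yield infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for LED detections at one face location
d, c = sdt_measures(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

Comparing d′ across the four LED locations would then indicate which areas of the talker's face received the greatest focus of attention.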
Dalili, Michael N; Schofield-Toloza, Lawrence; Munafò, Marcus R; Penton-Voak, Ian S
2017-08-01
Many cognitive bias modification (CBM) tasks use facial expressions of emotion as stimuli. Some tasks use unique facial stimuli, while others use composite stimuli, given evidence that emotion is encoded prototypically. However, CBM using composite stimuli may be identity- or emotion-specific, and may not generalise to other stimuli. We investigated the generalisability of effects using composite faces in two experiments. Healthy adults in each study were randomised to one of four training conditions: two stimulus-congruent conditions, where the same faces were used during all phases of the task, and two stimulus-incongruent conditions, where faces of the opposite sex (Experiment 1) or faces depicting another emotion (Experiment 2) were used after the modification phase. Our results suggested that training effects generalised across identities. However, our results indicated only partial generalisation across emotions. These findings suggest that effects obtained using composite stimuli may extend beyond the stimuli used in the task but remain emotion-specific.
Wiese, Holger; Schweinberger, Stefan R; Neumann, Markus F
2008-11-01
We used repetition priming to investigate implicit and explicit processes of unfamiliar face categorization. During prime and test phases, participants categorized unfamiliar faces according to either age or gender. Faces presented at test were either new or primed in a task-congruent (same task during priming and test) or incongruent (different tasks) condition. During age categorization, reaction times revealed significant priming for both priming conditions, and event-related potentials yielded an increased N170 over the left hemisphere as a result of priming. During gender categorization, congruent faces elicited priming and a latency decrease in the right N170. Accordingly, information about age is extracted irrespective of processing demands, and priming facilitates the extraction of feature information reflected in the left N170 effect. By contrast, priming of gender categorization may depend on whether the task at initial presentation requires configural processing.
Receiver operating characteristic analysis of age-related changes in lineup performance.
Humphries, Joyce E; Flowe, Heather D
2015-04-01
In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
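Lineup ROC analysis of the kind described above cumulates suspect identifications and false identifications across confidence levels and compares groups on the area under the resulting curve. A minimal sketch; the counts are invented for illustration and the trapezoidal partial AUC is one standard choice, not necessarily the exact procedure of the reanalysis:

```python
from itertools import accumulate

def lineup_roc(hit_counts, false_id_counts, n_target_lineups, n_lure_lineups):
    """ROC points and partial AUC from confidence-binned lineup decisions.

    Counts are ordered from highest to lowest confidence; cumulating them
    traces out the ROC, and the trapezoidal area under it indexes memory
    sensitivity separately from response bias.
    """
    hr = [0.0] + [h / n_target_lineups for h in accumulate(hit_counts)]
    far = [0.0] + [f / n_lure_lineups for f in accumulate(false_id_counts)]
    auc = sum((far[i + 1] - far[i]) * (hr[i + 1] + hr[i]) / 2
              for i in range(len(hr) - 1))
    return far, hr, auc

# Hypothetical adult data: suspect IDs at high/medium/low confidence
far, hr, auc = lineup_roc([30, 15, 5], [2, 4, 6],
                          n_target_lineups=100, n_lure_lineups=100)
```

Greater memory sensitivity (e.g., adults vs. young children) would appear as a larger partial AUC over the same false-identification range.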
Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo
2014-01-01
Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show ASE as well as typically developed (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying ASE are the same for both TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of ASE both in TD and ASD children. However, the results of the recognition task revealed group differences: In TD children, detection of angry faces required more configural face processing and disrupted the processing of local features. In ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477
Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.
2014-01-01
Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task, all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1 s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881
Face identity matching is selectively impaired in developmental prosopagnosia.
Fisher, Katie; Towler, John; Eimer, Martin
2017-04-01
Individuals with developmental prosopagnosia (DP) have severe face recognition deficits, but the mechanisms that are responsible for these deficits have not yet been fully identified. We assessed whether the activation of visual working memory for individual faces is selectively impaired in DP. Twelve DPs and twelve age-matched control participants were tested in a task where they reported whether successively presented faces showed the same or two different individuals, and another task where they judged whether the faces showed the same or different facial expressions. Repetitions versus changes of the other currently irrelevant attribute were varied independently. DPs showed impaired performance in the identity task, but performed at the same level as controls in the expression task. An electrophysiological marker for the activation of visual face memory by identity matches (N250r component) was strongly attenuated in the DP group, and the size of this attenuation was correlated with poor performance in a standardized face recognition test. Results demonstrate an identity-specific deficit of visual face memory in DPs. Their reduced sensitivity to identity matches in the presence of other image changes could result from earlier deficits in the perceptual extraction of image-invariant visual identity cues from face images. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Content-specific activational effects of estrogen on working memory performance.
Vranić, Andrea; Hromatko, Ivana
2008-07-01
The authors explored the influence of task content and the menstrual cycle phase on working memory (WM) performance. They addressed the content specificity of WM in the framework of evolutionary psychology, proposing a hormone-mediated adaptive design governing face perception. The authors tested 2 groups of healthy young women (n = 66 women with regular menstrual cycle, n = 27 oral contraceptive users) on a WM task with adult male or infant face photographs. Analyses of variance showed significant interaction between task content and estrogen level. Women were more efficient in solving the male faces task during high-estrogen phase of the cycle than during low-estrogen phase. No differences were found in the efficacy of solving the infant faces task between different phases of the cycle. Results suggest content-specific activational effects of estrogen on the WM performance and are consistent with the notion of a hormonal mechanism underlying adaptive shifts in cognition related to mating motivation.
Oncologists' identification of mental health distress in cancer patients: Strategies and barriers.
Granek, L; Nakash, O; Ariad, S; Shapira, S; Ben-David, M
2018-03-06
The purpose of this research was to examine oncologists' perspectives on indicators of mental health distress in patients: what strategies they use to identify these indicators, and what barriers they face in this task. Twenty-three oncologists were interviewed, and the grounded theory method of data collection and analysis was used. Oncologists perceived distress to be a normative part of having cancer and looked for affective, physical, verbal and behavioural indicators using a number of strategies. Barriers to identification of mental health distress included difficulty in differentiating between mental health distress and symptoms of the disease, and lack of training. A systematic, time-efficient assessment of symptoms of emotional distress is critical for identification of psychiatric disorders among patients and differentiating normative emotional responses from psychopathology. Clinical bias and misdiagnosis can be a consequence of an ad hoc, intuitive approach to assessment, which can have consequences for patients and their families. Once elevated risk is identified for mental health distress, the patient can be referred to specialised care that can offer evidence-based treatments. © 2018 John Wiley & Sons Ltd.
Nowparast Rostami, Hadiseh; Sommer, Werner; Zhou, Changsong; Wilhelm, Oliver; Hildebrandt, Andrea
2017-10-01
The enhanced N1 component in event-related potentials (ERPs) to face stimuli, termed N170, is considered to indicate the structural encoding of faces. Previously, individual differences in the latency of the N170 have been related to face and object cognition abilities. By orthogonally manipulating content domain (faces vs objects) and task demands (easy/speed vs difficult/accuracy) in both psychometric and EEG tasks, we investigated the uniqueness of the processes underlying face cognition as compared with object cognition and the extent to which the N1/N170 component can explain individual differences in face and object cognition abilities. Data were recorded from N = 198 healthy young adults. Structural equation modeling (SEM) confirmed that the accuracies of face perception and face memory are specific abilities over and above general object cognition; in contrast, the speed of face processing was not differentiable from the speed of object cognition. Although there was considerable domain-general variance in the N170 shared with the N1, there was significant face-specific variance in the N170. The brain-behavior relationship showed that faster face-specific processes for the structural encoding of faces are associated with higher accuracy in both perceiving and memorizing faces. Moreover, in difficult task conditions, qualitatively different processes are additionally needed for recognizing face and object stimuli as compared with easy tasks. The difficulty-dependent variance components in the N170 amplitude were related to both face and object memory performance. We discuss implications for understanding individual differences in face cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wagner, Jennifer B.; Hirsch, Suzanna B.; Vogel-Farley, Vanessa K.; Redcay, Elizabeth; Nelson, Charles A.
2014-01-01
Individuals with autism spectrum disorder (ASD) often have difficulty with social-emotional cues. This study examined the neural, behavioral, and autonomic correlates of emotional face processing in adolescents with ASD and typical development (TD) using eye-tracking and event-related potentials (ERPs) across two different paradigms. Scanning of faces was similar across groups in the first task, but the second task found that face-sensitive ERPs varied with emotional expressions only in TD. Further, ASD showed enhanced neural responding to non-social stimuli. In TD only, attention to eyes during eye-tracking related to faster face-sensitive ERPs in a separate task; in ASD, a significant positive association was found between autonomic activity and attention to mouths. Overall, ASD showed an atypical pattern of emotional face processing, with reduced neural differentiation between emotions and a reduced relationship between gaze behavior and neural processing of faces. PMID:22684525
Face-name learning in older adults: a benefit of hyper-binding.
Weeks, Jennifer C; Biss, Renée K; Murphy, Kelly J; Hasher, Lynn
2016-10-01
Difficulty remembering faces and corresponding names is a hallmark of cognitive aging, as is increased susceptibility to distraction. Given evidence that older adults spontaneously encode relationships between target pictures and simultaneously occurring distractors (a hyper-binding phenomenon), we asked whether memory for face-name pairs could be improved through prior exposure to faces presented with distractor names. In three experiments, young and older adults performed a selective attention task on faces while ignoring superimposed names. After a delay, they learned and were tested on face-name pairs that were either maintained or rearranged from the initial task but were not told of the connection between tasks. In each experiment, older but not younger participants showed better memory for maintained than for rearranged pairs, indicating that older adults' natural propensity to tacitly encode and bind relevant and irrelevant information can be employed to aid face-name memory performance.
Never forget a name: white matter connectivity predicts person memory
Metoki, Athanasia; Alm, Kylie H.; Wang, Yin; Ngo, Chi T.; Olson, Ingrid R.
2018-01-01
Through learning and practice, we can acquire numerous skills, ranging from the simple (whistling) to the complex (memorizing operettas in a foreign language). It has been proposed that complex learning requires a network of brain regions that interact with one another via white matter pathways. One candidate white matter pathway, the uncinate fasciculus (UF), has exhibited mixed results for this hypothesis: some studies have shown UF involvement across a range of memory tasks, while other studies report null results. Here, we tested the hypothesis that the UF supports associative memory processes and that this tract can be parcellated into sub-tracts that support specific types of memory. Healthy young adults performed behavioral tasks (two face-name learning tasks, one word pair memory task) and underwent a diffusion-weighted imaging scan. Our results revealed that variation in UF microstructure was significantly associated with individual differences in performance on both face-name tasks, as well as the word association memory task. A UF sub-tract, functionally defined by its connectivity between face-selective regions in the anterior temporal lobe and orbitofrontal cortex, selectively predicted face-name learning. In contrast, connectivity between the fusiform face patch and both anterior face patches had no predictive validity. These findings suggest that there is a robust and replicable relationship between the UF and associative learning and memory. Moreover, this large white matter pathway can be subdivided to reveal discrete functional profiles. PMID:28646241
Effect of perceptual load on semantic access by speech in children.
Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé
2013-04-01
The purpose of this study was to examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If the irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.
Righi, Giulia; Westerlund, Alissa; Congdon, Eliza L.; Troller-Renfree, Sonya; Nelson, Charles A.
2013-01-01
The goal of the present study was to investigate infants’ processing of female and male faces. We used an event-related potential (ERP) priming task, as well as a visual-paired comparison (VPC) eye tracking task, to explore how 7-month-old “female expert” infants differed in their responses to faces of different genders. Female faces elicited larger N290 amplitudes than male faces. Furthermore, infants showed a priming effect for female faces only, whereby the N290 was significantly more negative for novel female faces compared to primed female faces. The VPC experiment was designed to test whether infants could reliably discriminate between two female and two male faces. Analyses showed that infants were able to differentiate faces of both genders. The results of the present study suggest that 7-month-olds with a large amount of female face experience show a processing advantage for forming a neural representation of female faces, compared to male faces. However, the enhanced neural sensitivity to the repetition of female faces is not due to the infants' inability to discriminate male faces. Instead, the combination of results from the two tasks suggests that the differential processing of female faces may be a signature of expert-level processing. PMID:24200421
The illusion of fame: how the nonfamous become famous.
Landau, Joshua D; Leed, Stacey A
2012-01-01
This article reports 2 experiments in which nonfamous faces were paired with famous (e.g., Oprah Winfrey) or semifamous (e.g., Annika Sorenstam) faces during an initial orienting task. In Experiment 1, the orienting task directed participants to consider the relationship between the paired faces. In Experiment 2, participants considered distinctive qualities of the paired faces. Participants then judged the fame level of old and new nonfamous faces, semifamous faces, and famous faces. Pairing a nonfamous face with a famous face resulted in a higher fame rating than pairing a nonfamous face with a semifamous face. The fame attached to the famous people was misattributed to their nonfamous partners. We discuss this pattern of results in the context of current theoretical explanations of familiarity misattributions.
Computer-mediated communication: task performance and satisfaction.
Simon, Andrew F
2006-06-01
The author assessed satisfaction and performance on 3 tasks (idea generation, intellective, judgment) among 75 dyads (N = 150) working through 1 of 3 modes of communication (instant messaging, videoconferencing, face to face). The author based predictions on the Media Naturalness Theory (N. Kock, 2001, 2002) and on findings from past researchers (e.g., D. M. DeRosa, C. Smith, & D. A. Hantula, in press) of the interaction between tasks and media. The present author did not identify task performance differences, although satisfaction with the medium was lower among those dyads communicating through an instant-messaging system than among those interacting face to face or through videoconferencing. The findings support the Media Naturalness Theory. The author discussed them in relation to the participants' frequent use of instant messaging and their familiarity with new communication media.
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Lagattuta, Kristin Hansen; Kramer, Hannah J
2017-01-01
We used eye tracking to examine 4- to 10-year-olds' and adults' (N = 173) visual attention to negative (anger, fear, sadness, disgust) and neutral faces when paired with happy faces in 2 experimental conditions: free-viewing ("look at the faces") and directed ("look only at the happy faces"). Regardless of instruction, all age groups more often looked first to negative versus positive faces (no age differences), suggesting that initial orienting is driven by bottom-up processes. In contrast, biases in more sustained attention-last looks and looking duration-varied by age and could be modified by top-down instruction. On the free-viewing task, all age groups exhibited a negativity bias which attenuated with age and remained stable across trials. When told to look only at happy faces (directed task), all age groups shifted to a positivity bias, with linear age-related improvements. This ability to implement the "look only at the happy faces" instruction, however, fatigued over time, with the decrement stronger for children. Controlling for age, individual differences in executive function (working memory and inhibitory control) had no relation to the free-viewing task; however, these variables explained substantial variance on the directed task, with children and adults higher in executive function showing better skill at looking last and looking longer at happy faces. Greater anxiety predicted more first looks to angry faces on the directed task. These findings advance theory and research on normative development and individual differences in the bias to prioritize negative information, including contributions of bottom-up salience and top-down control. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Peschard, Virginie; Philippot, Pierre; Joassin, Frédéric; Rossignol, Mandy
2013-04-01
Social anxiety has been characterized by an attentional bias towards threatening faces. Electrophysiological studies have demonstrated modulations of cognitive processing from 100 ms after stimulus presentation. However, the impact of the stimulus features and task instructions on facial processing remains unclear. Event-related potentials were recorded while high and low socially anxious individuals performed an adapted Stroop paradigm that included a colour-naming task with non-emotional stimuli, an emotion-naming task (the explicit task) and a colour-naming task (the implicit task) on happy, angry and neutral faces. Whereas the impact of task factors was examined by contrasting an explicit and an implicit emotional task, the effects of perceptual changes on facial processing were explored by including upright and inverted faces. The findings showed an enhanced P1 in social anxiety during the three tasks, without a moderating effect of the type of task or stimulus. These results suggest a global modulation of attentional processing in performance situations. Copyright © 2013 Elsevier B.V. All rights reserved.
The influence of linguistic experience on pitch perception in speech and nonspeech sounds
NASA Astrophysics Data System (ADS)
Bent, Tessa; Bradlow, Ann R.; Wright, Beverly A.
2003-04-01
How does native language experience with a tone or nontone language influence pitch perception? To address this question 12 English and 13 Mandarin listeners participated in an experiment involving three tasks: (1) Mandarin tone identification-a clearly linguistic task where a strong effect of language background was expected, (2) pure-tone and pulse-train frequency discrimination-a clearly nonlinguistic auditory discrimination task where no effect of language background was expected, and (3) pitch glide identification-a nonlinguistic auditory categorization task where some effect of language background was expected. As anticipated, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners (Task 1) and the two groups' pure-tone and pulse-train frequency discrimination thresholds did not differ (Task 2). For pitch glide identification (Task 3), Mandarin listeners made more identification errors: in comparison with English listeners, Mandarin listeners more frequently misidentified falling pitch glides as level, and more often misidentified level pitch "glides" with relatively high frequencies as rising and those with relatively low frequencies as falling. Thus, it appears that the effect of long-term linguistic experience can extend beyond lexical tone category identification in syllables to pitch class identification in certain nonspeech sounds. [Work supported by Sigma Xi and NIH.]
Socio-cognitive load and social anxiety in an emotional anti-saccade task
Butler, Stephen H.; Grealy, Madeleine A.
2018-01-01
The anti-saccade task has been used to measure attentional control related to general anxiety but less so with social anxiety specifically. Previous research has not been conclusive in suggesting that social anxiety may lead to difficulties in inhibiting faces. It is possible that static face paradigms do not convey a sufficient social threat to elicit an inhibitory response in socially anxious individuals. The aim of the current study was twofold. We investigated the effect of social anxiety on performance in an anti-saccade task with neutral or emotional faces preceded either by a social stressor (Experiment 1), or valenced sentence primes designed to increase the social salience of the task (Experiment 2). Our results indicated that latencies were significantly longer for happy than angry faces. Additionally, and surprisingly, high anxious participants made more erroneous anti-saccades to neutral than to angry and happy faces, whilst the low anxious groups exhibited a trend in the opposite direction. Results are consistent with a general approach-avoidance response for positive and threatening social information. However, increased socio-cognitive load may alter attentional control, with high anxious individuals avoiding emotional faces but finding it more difficult to inhibit ambiguous faces. The effects of social sentence primes on attention appear to be subtle but suggest that the anti-saccade task will only elicit socially relevant responses where the paradigm is more ecologically valid. PMID:29795619
Ofan, Renana H; Rubin, Nava; Amodio, David M
2011-10-01
We examined the relation between neural activity reflecting early face perception processes and automatic and controlled responses to race. Participants completed a sequential evaluative priming task, in which two-tone images of Black faces, White faces, and cars appeared as primes, followed by target words categorized as pleasant or unpleasant, while electroencephalography was recorded. Half of these participants were alerted that the task assessed racial prejudice and could reveal their personal bias ("alerted" condition). To assess face perception processes, the N170 component of the ERP was examined. For all participants, stronger automatic pro-White bias was associated with larger N170 amplitudes to Black than White faces. For participants in the alerted condition only, larger N170 amplitudes to Black versus White faces were also associated with less controlled processing on the word categorization task. These findings suggest that preexisting racial attitudes affect early face processing and that situational factors moderate the link between early face processing and behavior.
How do schizophrenia patients use visual information to decode facial emotion?
Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F
2011-09-01
Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-squares multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
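The core of the Bubbles analysis described in this abstract is a regression of trial-by-trial accuracy on the per-pixel visual information revealed by random Gaussian apertures. The following is a minimal illustrative sketch, not the authors' code: the face stimuli, the "diagnostic" eye region, and the simulated observer are all invented stand-ins, and real analyses operate at multiple spatial frequency scales rather than the single scale shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, n_trials, n_bubbles, sigma = 64, 64, 500, 10, 3.0

def bubble_mask(rng):
    """Sum of randomly located Gaussian apertures ("bubbles")."""
    yy, xx = np.mgrid[0:H, 0:W]
    mask = np.zeros((H, W))
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, H), rng.integers(0, W)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Simulated observer: accuracy depends on how much of a hypothetical
# "diagnostic" region (here, a band standing in for the eyes) the
# random mask happens to reveal on each trial.
diagnostic = np.zeros((H, W))
diagnostic[20:28, 12:52] = 1.0
masks = np.stack([bubble_mask(rng) for _ in range(n_trials)])
exposure = (masks * diagnostic).sum(axis=(1, 2))
accuracy = (exposure + rng.normal(0, exposure.std(), n_trials)
            > np.median(exposure)).astype(float)

# Least-squares regression of accuracy on per-pixel exposure: pixels
# whose revelation predicts correct responses form the classification
# image of diagnostic facial information.
X = masks.reshape(n_trials, -1)
X = X - X.mean(axis=0)
beta, *_ = np.linalg.lstsq(X, accuracy - accuracy.mean(), rcond=None)
classification_image = beta.reshape(H, W)
```

In a full analysis, the classification image would be z-scored and thresholded to locate regions (eyes, mouth) that significantly drive correct emotion identification.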
Electrophysiological evidence for women superiority on unfamiliar face processing.
Sun, Tianyi; Li, Lin; Xu, Yuanli; Zheng, Li; Zhang, Weidong; Zhou, Fanzhi Anita; Guo, Xiuyan
2017-02-01
Previous research has reported women's superiority on face recognition tasks, taking sex differences in accuracy rates as major evidence. By appropriately modifying experimental tasks and examining reaction time as a behavioral measure, it is possible to explore which stage of face processing contributes to women's superiority. We used a modified delayed matching-to-sample task to investigate the time course characteristics of face recognition by ERP, for both men and women. In each trial, participants matched successively presented faces to samples (target faces) by key pressing. It was revealed that women were more accurate and faster than men on the task. ERP results showed that compared to men, women had shorter peak latencies of the early components P100 and N170, as well as larger mean amplitude of the late positive component P300. Correlations between P300 mean amplitudes and RTs were found for both sexes. In addition, reaction times of women, but not men, were positively correlated with N170 latencies. In general, we provided further evidence for women's superiority on face recognition in both behavioral and neural aspects. Copyright © 2016 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.
Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.
Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong
2016-01-01
This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15, respectively) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms regardless of the valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process and associated diminished positive memory may relate to pathological mechanisms for negative symptoms.
Four-year-olds use norm-based coding for face identity.
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-05-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity, but it is not yet known if preschool-aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4-year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4-year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.
Effect of Perceptual Load on Semantic Access by Speech in Children
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve
2013-01-01
Purpose: To examine whether semantic access by speech requires attention in children. Method: Children ("N" = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load,…
Kozlowska, Kasia; Brown, Kerri J; Palmer, Donna M; Williams, Lea M
2013-04-01
This study aimed to assess how children and adolescents with conversion disorders identify universal facial expressions of emotion and to determine whether identification of emotion in faces relates to subjective emotional distress. Fifty-seven participants (41 girls and 16 boys) aged 8.5 to 18 years with conversion disorders and 57 age- and sex-matched healthy controls completed a computerized task in which their accuracy and reaction times for identifying facial expressions were recorded. To isolate the effect of individual emotional expressions, participants' reaction times for each emotion (fear, anger, sadness, disgust, and happiness) were subtracted from their reaction times for the neutral control face. Participants also completed self-report measures of subjective emotional distress. Children/adolescents with conversion disorders showed faster reaction times for identifying expressions of sadness (t(112) = -2.2, p = .03; 444 [609] versus 713 [695], p = .03) and slower reaction times for happy expressions (t(99.3) = 2.28, p ≤ .024; -33 [35] versus 174 [51], p = .024), compared with controls (F(33.75, 419.81) = 3.76, p < .001). There were no significant correlations (at the corrected p value of .01) between reaction times and subjective reports of perceived distress (r values ranged from 0.092 to 0.221; p > .018). There were also no differences in identification accuracy for any emotion (p > .82). The observation of faster reaction times to sad faces in children and adolescents with conversion disorders suggests increased vigilance and motor readiness to emotional signals that are potential threats to self or to close others. These effects may occur before conscious processing.
NASA Astrophysics Data System (ADS)
Golick, Douglas A.; Heng-Moss, Tiffany M.; Steckelberg, Allen L.; Brooks, David. W.; Higley, Leon G.; Fowler, David
2013-08-01
The purpose of the study was to determine whether undergraduate students receiving web-based instruction based on traditional, key character, or classification instruction differed in their performance of insect identification tasks. All groups showed a significant improvement in insect identifications on pre- and post-two-dimensional picture specimen quizzes. The study also found that students performed worse on family-level identification tasks than on broader insect order and arthropod classification tasks. Finally, students erred significantly more by misidentification than by misspelling specimen names on prepared specimen quizzes. Results of this study support the conclusion that short web-based insect identification exercises can improve insect identification performance. Also included is a discussion of how these results can be used in teaching and in future research on biological identification.
An Ensemble Framework Coping with Instability in the Gene Selection Process.
Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M
2018-03-01
This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems from the informative gene subsets found by different filter methods. This makes the identification of significant genes by the experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease, and multiple valid groups of biomarkers. Even though there is a wide range of proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and, thus, providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection were added in such a way that different classifiers simultaneously assess gene subsets to face instability. Firstly, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability according to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability according to classification methods). The achieved results were evaluated in two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability according to the disease, for which promising results were achieved.
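The ensemble idea in this abstract — running several filter methods and aggregating their gene rankings so that no single method's idiosyncrasies dominate — can be sketched as follows. This is a generic illustration with invented data and an arbitrary choice of filters (ANOVA F-score, mutual information, random-forest importance) and of aggregation rule (Borda count); it is not the paper's five-stage pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

# Toy expression matrix: 60 samples x 200 "genes", 10 informative.
X, y = make_classification(n_samples=60, n_features=200,
                           n_informative=10, random_state=0)

# Stage 1: an ensemble of filter methods, each producing a score
# per gene (higher = more informative).
scores = {
    "anova": f_classif(X, y)[0],
    "mutual_info": mutual_info_classif(X, y, random_state=0),
    "rf_importance": RandomForestClassifier(
        n_estimators=200, random_state=0).fit(X, y).feature_importances_,
}

# Stage 2: aggregate the per-method rankings (Borda count) so the
# final subset is stable against any single filter's biases.
ranks = np.zeros(X.shape[1])
for s in scores.values():
    ranks += np.argsort(np.argsort(-s))  # rank 0 = best gene
selected = np.argsort(ranks)[:20]        # top-20 consensus genes
```

A later stage, as the abstract describes, would re-filter `selected` by requiring the genes to be useful to several different classifiers at once, and would check that the surviving subset replicates across independent datasets of the same disease.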
Threatening scenes but not threatening faces shorten time-to-contact estimates.
DeLucia, Patricia R; Brendel, Esther; Hecht, Heiko; Stacy, Ryan L; Larsen, Jeff T
2014-08-01
We previously reported that time-to-contact (TTC) judgments of threatening scene pictures (e.g., frontal attacks) resulted in shortened estimations and were mediated by cognitive processes, and that judgments of threatening (e.g., angry) face pictures resulted in a smaller effect and did not seem cognitively mediated. In the present study, the effects of threatening scenes and faces were compared in two different tasks. An effect of threatening scene pictures occurred in a prediction-motion task, which putatively requires cognitive motion extrapolation, but not in a relative TTC judgment task, which was designed to be less reliant on cognitive processes. An effect of threatening face pictures did not occur in either task. We propose that an object's explicit potential of threat per se, and not only emotional valence, underlies the effect of threatening scenes on TTC judgments and that such an effect occurs only when the task allows sufficient cognitive processing. Results are consistent with distinctions between predator and social fear systems and different underlying physiological mechanisms. Not all threatening information elicits the same responses, and whether an effect occurs at all may depend on the task and the degree to which the task involves cognitive processes.
Modulation of Alpha Oscillations in the Human EEG with Facial Preference
Kang, Jae-Hwan; Kim, Su Jin; Cho, Yang Seok; Kim, Sung-Phil
2015-01-01
Facial preference that results from the processing of facial information plays an important role in social interactions as well as the selection of a mate, friend, candidate, or favorite actor. However, it still remains elusive which brain regions are implicated in the neural mechanisms underlying facial preference, and how neural activities in these regions are modulated during the formation of facial preference. In the present study, we investigated the modulation of electroencephalography (EEG) oscillatory power with facial preference. For the reliable assessments of facial preference, we designed a series of passive viewing and active choice tasks. In the former task, twenty-four face stimuli were passively viewed by participants for multiple times in random order. In the latter task, the same stimuli were then evaluated by participants for their facial preference judgments. In both tasks, significant differences between the preferred and non-preferred faces groups were found in alpha band power (8–13 Hz) but not in other frequency bands. The preferred faces generated more decreases in alpha power. During the passive viewing task, significant differences in alpha power between the preferred and non-preferred face groups were observed at the left frontal regions in the early (0.15–0.4 s) period during the 1-s presentation. By contrast, during the active choice task when participants consecutively watched the first and second face for 1 s and then selected the preferred one, an alpha power difference was found for the late (0.65–0.8 s) period over the whole brain during the first face presentation and over the posterior regions during the second face presentation. These results demonstrate that the modulation of alpha activity by facial preference is a top-down process, which requires additional cognitive resources to facilitate information processing of the preferred faces that capture more visual attention than the non-preferred faces. PMID:26394328
NASA Astrophysics Data System (ADS)
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
2017-10-01
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this approach is constrained because most criminals nowadays are careful not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas for surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically measure the similarity between photos in the footage and recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. The system detects and recognizes faces automatically, helping law enforcement to identify suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
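The Principal Component Analysis approach named in this abstract is the classic "eigenfaces" method: project every gallery face onto the leading eigenvectors of the gallery's covariance and match a probe to its nearest neighbor in that low-dimensional space. The sketch below illustrates the idea only; the random "gallery", the component count, and the `identify` helper are invented stand-ins, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mugshot gallery": 40 flattened 32x32 images (random stand-ins;
# a real system would use cropped, aligned face crops from CCTV).
gallery = rng.normal(size=(40, 32 * 32))

# PCA via SVD of the mean-centered gallery: the top right singular
# vectors are the "eigenfaces".
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:20]                      # keep 20 components
gallery_proj = centered @ eigenfaces.T    # gallery in eigenface space

def identify(probe, threshold=np.inf):
    """Match a probe image to the nearest gallery face in PCA space;
    return None if the best match is worse than the threshold."""
    w = (probe - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(gallery_proj - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

# A noisy copy of gallery face 7 should come back as entry 7.
probe = gallery[7] + rng.normal(scale=0.1, size=32 * 32)
```

The distance threshold is what turns nearest-neighbor matching into an open-set system: a CCTV face that is far from every database entry is reported as unknown rather than forced onto the closest criminal record.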
Amygdala habituation and prefrontal functional connectivity in youth with autism spectrum disorders.
Swartz, Johnna R; Wiggins, Jillian Lee; Carrasco, Melisa; Lord, Catherine; Monk, Christopher S
2013-01-01
Amygdala habituation, the rapid decrease in amygdala responsiveness to the repeated presentation of stimuli, is fundamental to the nervous system. Habituation is important for maintaining adaptive levels of arousal to predictable social stimuli and decreased habituation is associated with heightened anxiety. Input from the ventromedial prefrontal cortex (vmPFC) regulates amygdala activity. Although previous research has shown abnormal amygdala function in youth with autism spectrum disorders (ASD), no study has examined amygdala habituation in a young sample or whether habituation is related to amygdala connectivity with the vmPFC. Data were analyzed from 32 children and adolescents with ASD and 56 typically developing controls who underwent functional magnetic resonance imaging while performing a gender identification task for faces that were fearful, happy, sad, or neutral. Habituation was tested by comparing amygdala activation to faces during the first half versus the second half of the session. VmPFC-amygdala connectivity was examined through psychophysiologic interaction analysis. Youth with ASD had decreased amygdala habituation to sad and neutral faces compared with controls. Moreover, decreased amygdala habituation correlated with autism severity as measured by the Social Responsiveness Scale. There was a group difference in vmPFC-amygdala connectivity while viewing sad faces, and connectivity predicted amygdala habituation to sad faces in controls. Sustained amygdala activation to faces suggests that repeated face presentations are processed differently in individuals with ASD, which could contribute to social impairments. Abnormal modulation of the amygdala by the vmPFC may play a role in decreased habituation. Copyright © 2013 American Academy of Child & Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Recovering faces from memory: the distracting influence of external facial features.
Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B
2012-06-01
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal-features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal-features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face-arrays also improved internal-features quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external-features promoted substantially more identifiable images than using the previous method of blurring external-features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method to construct composites, one that should dramatically increase identification of offenders.
Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2011-11-01
Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.
Face-to-face interference in typical and atypical development
Riby, Deborah M; Doherty-Sneddon, Gwyneth; Whittle, Lisa
2012-01-01
Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze it interferes with task completion. In this novel study we quantify face interference for the first time in Williams syndrome (WS) and Autism Spectrum Disorder (ASD). These disorders of development impact on cognition and social attention, but how do faces interfere with cognitive processing? Individuals developing typically as well as those with ASD (n = 19) and WS (n = 16) were recorded during a question and answer session that involved mathematics questions. In phase 1 gaze behaviour was not manipulated, but in phase 2 participants were required to maintain eye contact with the experimenter at all times. Looking at faces decreased task accuracy for individuals who were developing typically. Critically, the same pattern was seen in WS and ASD, whereby task performance decreased when participants were required to hold face gaze. The results show that looking at faces interferes with task performance in all groups. This finding requires the caveat that individuals with WS and ASD found it harder than individuals who were developing typically to maintain eye contact throughout the interaction. Individuals with ASD struggled to hold eye contact at all points of the interaction while those with WS found it especially difficult when thinking. PMID:22356183
Kim, Jinyoung; Kang, Min-Suk; Cho, Yang Seok; Lee, Sang-Hun
2017-01-01
As documented by Darwin 150 years ago, emotion expressed in human faces readily draws our attention and promotes sympathetic emotional reactions. How do such reactions to the expression of emotion affect our goal-directed actions? Despite the substantial advance made in the neural mechanisms of both cognitive control and emotional processing, it is not yet known well how these two systems interact. Here, we studied how emotion expressed in human faces influences cognitive control of conflict processing, spatial selective attention and inhibitory control in particular, using the Eriksen flanker paradigm. In this task, participants viewed displays of a central target face flanked by peripheral faces and were asked to judge the gender of the target face; task-irrelevant emotion expressions were embedded in the target face, the flanking faces, or both. We also monitored how emotion expression affects gender judgment performance while varying the relative timing between the target and flanker faces. As previously reported, we found robust gender congruency effects, namely slower responses to the target faces whose gender was incongruent with that of the flanker faces, when the flankers preceded the target by 0.1 s. When the flankers preceded the target by 0.3 s, however, the congruency effect vanished in most of the viewing conditions, except for when emotion was expressed only in the flanking faces or when congruent emotion was expressed in the target and flanking faces. These results suggest that emotional saliency can prolong a substantial degree of conflict by diverting bottom-up attention away from the target, and that inhibitory control on task-irrelevant information from flanking stimuli is deterred by the emotional congruency between target and flanking stimuli. PMID:28676780
Visual information processing of faces in body dysmorphic disorder.
Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan
2007-12-01
Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms. Objective: To determine whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. Design: Case-control study. Setting: University hospital. Participants: Twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement. Intervention: Functional magnetic resonance imaging while performing matching tasks of face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. Main Outcome Measures: Blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Results: Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions, for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Conclusions: Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD.
The fact that these findings occurred while subjects viewed others' faces suggests differences in visual processing beyond distortions of their own appearance.
Local Navon letter processing affects skilled behavior: a golf-putting experiment.
Lewis, Michael B; Dawkins, Gemma
2015-04-01
Expert or skilled behaviors (for example, face recognition or sporting performance) are typically performed automatically and with little conscious awareness. Previous studies, in various domains of performance, have shown that activities immediately prior to a task demanding a learned skill can affect performance. In sport, describing the to-be-performed action is detrimental, whereas in face recognition, describing a face or reading local Navon letters is detrimental. Two golf-putting experiments are presented that compare the effects that these three tasks have on experienced and novice golfers. Experiment 1 found a Navon effect on golf performance for experienced players. Experiment 2 found, for experienced players only, that performance was impaired following the three tasks described above, when compared with reading or global Navon tasks. It is suggested that the three tasks affect skilled performance by provoking a shift from automatic behavior to a more analytic style. By demonstrating similarities between effects in face recognition and sporting behavior, we hope to better understand concepts in both fields.
Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo
2014-01-01
The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reactions to infants. Healthy male volunteers (N = 13) were tested on their ability to detect infant or adult target faces among adult or infant distractor faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggest that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.
Dennett, Hugh W; McKone, Elinor; Tavashmi, Raka; Hall, Ashleigh; Pidcock, Madeleine; Edwards, Mark; Duchaine, Bradley
2012-06-01
Many research questions require a within-class object recognition task matched for general cognitive requirements with a face recognition task. If the object task also has high internal reliability, it can improve accuracy and power in group analyses (e.g., mean inversion effects for faces vs. objects), individual-difference studies (e.g., correlations between certain perceptual abilities and face/object recognition), and case studies in neuropsychology (e.g., whether a prosopagnosic shows a face-specific or object-general deficit). Here, we present such a task. Our Cambridge Car Memory Test (CCMT) was matched in format to the established Cambridge Face Memory Test, requiring recognition of exemplars across view and lighting change. We tested 153 young adults (93 female). Results showed high reliability (Cronbach's alpha = .84) and a range of scores suitable both for normal-range individual-difference studies and, potentially, for diagnosis of impairment. The mean for males was much higher than the mean for females. We demonstrate independence between face memory and car memory (dissociation based on sex, plus a modest correlation between the two), including where participants have high relative expertise with cars. We also show that expertise with real car makes and models of the era used in the test significantly predicts CCMT performance. Surprisingly, however, regression analyses imply that there is an effect of sex per se on the CCMT that is not attributable to a stereotypical male advantage in car expertise.
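The internal reliability figure quoted for the CCMT (Cronbach's alpha = .84) is computed from a subjects-by-items score matrix. A minimal sketch of the standard formula, using a tiny synthetic matrix rather than the actual CCMT responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items yield alpha = 1.0 (synthetic data, for illustration).
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
print(alpha)  # 1.0
```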
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
Skelly, Laurie R.; Decety, Jean
2012-01-01
Emotionally expressive faces are processed by a distributed network of interacting sub-cortical and cortical brain regions. The components of this network have been identified and described in large part by the stimulus properties to which they are sensitive, but as face processing research matures interest has broadened to also probe dynamic interactions between these regions and top-down influences such as task demand and context. While some research has tested the robustness of affective face processing by restricting available attentional resources, it is not known whether face network processing can be augmented by increased motivation to attend to affective face stimuli. Short videos of people expressing emotions were presented to healthy participants during functional magnetic resonance imaging. Motivation to attend to the videos was manipulated by providing an incentive for improved recall performance. During the motivated condition, there was greater coherence among nodes of the face processing network, more widespread correlation between signal intensity and performance, and selective signal increases in a task-relevant subset of face processing regions, including the posterior superior temporal sulcus and right amygdala. In addition, an unexpected task-related laterality effect was seen in the amygdala. These findings provide strong evidence that motivation augments co-activity among nodes of the face processing network and the impact of neural activity on performance. These within-subject effects highlight the necessity to consider motivation when interpreting neural function in special populations, and to further explore the effect of task demands on face processing in healthy brains. PMID:22768287
R Innes, Bobby; Burt, D Michael; Birch, Yan K; Hausmann, Markus
2015-12-28
Left hemiface biases observed within the Emotional Chimeric Face Task (ECFT) support emotional face perception models whereby all expressions are preferentially processed by the right hemisphere. However, previous research using this task has not considered that the visible midline between hemifaces might engage atypical facial emotion processing strategies in upright or inverted conditions, nor controlled for left visual field (thus right hemispheric) visuospatial attention biases. This study used novel emotional chimeric faces (blended at the midline) to examine laterality biases for all basic emotions. Left hemiface biases were demonstrated across all emotional expressions and were reduced, but not reversed, for inverted faces. The ECFT bias in upright faces was significantly increased in participants with a large attention bias. These results support the theory that left hemiface biases reflect a genuine bias in emotional face processing, and this bias can interact with attention processes similarly localized in the right hemisphere.
Payne, Sophie; Tsakiris, Manos
2017-02-01
Self-other discrimination is a crucial mechanism for social cognition. Neuroimaging and neurostimulation research has pointed to the involvement of the right temporoparietal region in a variety of self-other discrimination tasks. Although repetitive transcranial magnetic stimulation over the right temporoparietal area has been shown to disrupt self-other discrimination in face-recognition tasks, no research has investigated the effect of increasing the cortical excitability in this region on self-other face discrimination. Here we used transcranial direct current stimulation (tDCS) to investigate changes in self-other discrimination with a video-morphing task in which the participant's face morphed into, or out of, a familiar other's face. The task was performed before and after 20 min of tDCS targeting the right temporoparietal area (anodal, cathodal, or sham stimulation). Differences in task performance following stimulation were taken to indicate a change in self-other discrimination. Following anodal stimulation only, we observed a significant increase in the amount of self-face needed to distinguish between self and other. The findings are discussed in relation to the control of self and other representations and to domain-general theories of social cognition.
The evolution of face processing in primates
Parr, Lisa A.
2011-01-01
The ability to recognize faces is an important socio-cognitive skill that is associated with a number of cognitive specializations in humans. While numerous studies have examined the presence of these specializations in non-human primates, species where face recognition would confer distinct advantages in social situations, results have been mixed. The majority of studies in chimpanzees support homologous face-processing mechanisms with humans, but results from monkey studies appear largely dependent on the type of testing methods used. Studies that employ passive viewing paradigms, like the visual paired comparison task, report evidence of similarities between monkeys and humans, but tasks that use more stringent, operant response tasks, like the matching-to-sample task, often report species differences. Moreover, the data suggest that monkeys may be less sensitive than chimpanzees and humans to the precise spacing of facial features, in addition to the surface-based cues reflected in those features, information that is critical for the representation of individual identity. The aim of this paper is to provide a comprehensive review of the available data from face-processing tasks in non-human primates with the goal of understanding the evolution of this complex cognitive skill. PMID:21536559
ERIC Educational Resources Information Center
Baralt, Melissa
2013-01-01
Informed by the cognition hypothesis (Robinson, 2011), recent studies indicate that more cognitively complex tasks can result in better incorporation of feedback during interaction and, as a consequence, more learning. It is not known, however, how task complexity and feedback work together in computerized environments. The present study addressed…
Gender differences in empathy: the role of the right hemisphere.
Rueckert, Linda; Naybar, Nicolette
2008-07-01
The relationship between activation of the right cerebral hemisphere (RH) and empathy was investigated. Twenty-two men and 73 women participated by completing a chimeric face task and empathy questionnaire. For the face task, participants were asked to pick which of the two chimeric faces looked happier. Both men and women were significantly more likely to say the chimera with the smile to their left was happier, suggesting activation of the RH. As expected, men scored significantly lower than women on the empathy questionnaire, p=.003. A correlation was found between RH activation on the face task and empathy for women only, p=.037, suggesting a possible neural basis for gender differences in empathy.
Context Influences Holistic Processing of Non-face Objects in the Composite Task
Richler, Jennifer J.; Bukach, Cindy M.; Gauthier, Isabel
2013-01-01
We explore whether holistic-like effects can be observed for non-face objects in novices as a result of the task context. We measure contextually-induced congruency effects for novel objects (Greebles) in a sequential matching selective attention task (composite task). When format at study was blocked, congruency effects were observed for study-misaligned, but not study-aligned, conditions (Experiment 1). However, congruency effects were observed in all conditions when study formats were randomized (Experiment 2), revealing that the presence of certain trial types (study-misaligned) in an experiment can induce congruency effects. In a dual task, a congruency effect for Greebles was induced in trials where a face was first encoded, only if it was aligned (Experiment 3). Thus, congruency effects can be induced by context that operates at the scale of the entire experiment or within a single trial. Implications for using the composite task to measure holistic processing are discussed. PMID:19304644
Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo
2012-01-01
Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of two locations. However, it is not known whether this spontaneous appraisal of facial beauty also modulates attention in change detection among multiple locations, where a slower and more controlled search process is simultaneously affected by the magnitude of a change and by facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal of facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (the target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it was more difficult to detect a change when the new face was similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.
Perceptual and Gaze Biases during Face Processing: Related or Not?
Samson, Hélène; Fiori-Duharcourt, Nicole; Doré-Mazars, Karine; Lemoine, Christelle; Vergilino-Perez, Dorine
2014-01-01
Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and/or longer fixations on the left side of the face. Here, we recorded eye-movements, in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, right or left to the central fixation point or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not look necessarily toward the side of the face that we use to make a gender judgment task. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analysis and drawing attention to the influence of the method used to study this bias. PMID:24454927
Advanced human machine interaction for an image interpretation workstation
NASA Astrophysics Data System (ADS)
Maier, S.; Martin, M.; van de Camp, F.; Peinsipp-Byma, E.; Beyerer, J.
2016-05-01
In recent years, many new interaction technologies have been developed that enhance the usability of computer systems and allow for novel types of interaction. The areas of application for these technologies have mostly been in gaming and entertainment. However, in professional environments, there are especially demanding tasks that would greatly benefit from improved human machine interfaces as well as an overall improved user experience. We, therefore, envisioned and built an image interpretation workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a complex software product, such as a geo-information system to provide geographic context, an image annotation tool, software to generate standardized reports, and a tool to aid in the identification of objects. Using self-developed systems for hand tracking, pointing gestures, and head pose estimation, in addition to touchscreens, face identification, and speech recognition systems, we created a novel approach to this complex task. For example, head pose information is used to save the position of the mouse cursor on the currently focused screen and to restore it as soon as the same screen is focused again, while hand gestures allow for intuitive manipulation of 3D objects in mid-air. While the primary focus is on the task of image interpretation, all of the technologies involved provide generic ways of efficiently interacting with a multi-screen setup and could be utilized in other fields as well. In preliminary experiments, we received promising feedback from users in the military and started to tailor the functionality to their needs.
Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J
2018-03-14
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.
Neuroanatomical substrates involved in unrelated false facial recognition.
Ronzon-Gonzalez, Eliane; Hernandez-Castillo, Carlos R; Pasaye, Erick H; Vaca-Palomares, Israel; Fernandez-Ruiz, Juan
2017-11-22
Identifying faces is a process central to social interaction and a relevant factor in eyewitness theory. False recognition is a critical mistake during an eyewitness's identification scenario because it can lead to a wrongful conviction. Previous studies have described neural areas related to false facial recognition using the standard Deese/Roediger-McDermott (DRM) paradigm, which triggers related false recognition. Nonetheless, misidentification of faces without trying to elicit false memories (unrelated false recognition), as in a police lineup, could involve different cognitive processes and distinct neural areas. To delve into the neural circuitry of unrelated false recognition, we evaluated the memory and response confidence of participants while they viewed photographs of faces in an fMRI task. Functional activations of unrelated false recognition were identified by contrasting the activation in this condition vs. the activations related to recognition (hits) and correct rejections. The results identified the right precentral and cingulate gyri as areas with distinctive activations during false recognition events, suggesting a conflict resulting in dysfunction during memory retrieval. High confidence suggested that about 50% of misidentifications may be related to an unconscious process. These findings add to our understanding of the construction of facial memories and their biological basis, and the fallibility of eyewitness testimony.
Emotion recognition in girls with conduct problems.
Schwenck, Christina; Gensthaler, Angelika; Romanos, Marcel; Freitag, Christine M; Schneider, Wolfgang; Taurines, Regina
2014-01-01
A deficit in emotion recognition has been suggested to underlie conduct problems. Although several studies have been conducted on this topic so far, most concentrated on male participants. The aim of the current study was to compare recognition of morphed emotional faces in girls with conduct problems (CP) with elevated or low callous-unemotional (CU+ vs. CU-) traits and a matched healthy developing control group (CG). Sixteen girls with CP-CU+, 16 girls with CP-CU- and 32 controls (mean age: 13.23 years, SD=2.33 years) were included. Video clips with morphed faces were presented in two runs to assess emotion recognition. Multivariate analysis of variance with the factors group and run was performed. Girls with CP-CU- needed more time than the CG to encode sad, fearful, and happy faces and they correctly identified sadness less often. Girls with CP-CU+ outperformed the other groups in the identification of fear. Learning effects throughout runs were the same for all groups except that girls with CP-CU- correctly identified fear less often in the second run compared to the first run. Results need to be replicated with comparable tasks, which might result in subgroup-specific therapeutic recommendations.
Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel
2017-12-01
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
No Prior Entry for Threat-Related Faces: Evidence from Temporal Order Judgments
Schettino, Antonio; Loeys, Tom; Pourtois, Gilles
2013-01-01
Previous research showed that threat-related faces, due to their intrinsic motivational relevance, capture attention more readily than neutral faces. Here we used a standard temporal order judgment (TOJ) task to assess whether negative (either angry or fearful) emotional faces, when competing with neutral faces for attention selection, may lead to a prior entry effect and hence be perceived as appearing first, especially when uncertainty is high regarding the order of the two onsets. We did not find evidence for this conjecture across five different experiments, despite the fact that participants were invariably influenced by asynchronies in the respective onsets of the two competing faces in the pair, and could reliably identify the emotion in the faces. Importantly, by systematically varying task demands across experiments, we could rule out confounds related to suboptimal stimulus presentation or inappropriate task demands. These findings challenge the notion of an early automatic capture of attention by (negative) emotion. Future studies are needed to investigate whether the lack of systematic bias of attention by emotion is attributable to the primacy of a non-emotional cue in resolving the TOJ task, which in turn prevents negative emotion from exerting an early bottom-up influence on the guidance of spatial and temporal attention. PMID:23646126
Social anxiety under load: the effects of perceptual load in processing emotional faces.
Soares, Sandra C; Rocha, Marta; Neiva, Tiago; Rodrigues, Paulo; Silva, Carlos F
2015-01-01
Previous studies in the social anxiety arena have shown an impaired attentional control system, similar to that found in trait anxiety. However, the effect of task demands on social anxiety in socially threatening stimuli, such as angry faces, remains unseen. In the present study, 54 university students scoring high and low in the Social Interaction and Performance Anxiety and Avoidance Scale (SIPAAS) questionnaire, participated in a target letter discrimination task while task-irrelevant face stimuli (angry, disgust, happy, and neutral) were simultaneously presented. The results showed that high (compared to low) socially anxious individuals were more prone to distraction by task-irrelevant stimuli, particularly under high perceptual load conditions. More importantly, for such individuals, the accuracy proportions for angry faces significantly differed between the low and high perceptual load conditions, which is discussed in light of current evolutionary models of social anxiety.
How does cognitive load influence speech perception? An encoding hypothesis.
Mitterer, Holger; Mattys, Sven L
2017-01-01
Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.
When Emotions Matter: Focusing on Emotion Improves Working Memory Updating in Older Adults
Berger, Natalie; Richards, Anne; Davelaar, Eddy J.
2017-01-01
Research indicates that emotion can affect the ability to monitor and replace content in working memory, an executive function that is usually referred to as updating. However, it is less clear if the effects of emotion on updating vary with its relevance for the task and with age. Here, 25 younger (20–34 years of age) and 25 older adults (63–80 years of age) performed a 1-back and a 2-back task, in which they responded to younger, middle-aged, and older faces showing neutral, happy or angry expressions. The relevance of emotion for the task was manipulated through instructions to make match/non-match judgments based on the emotion (i.e., emotion was task-relevant) or the age (i.e., emotion was task-irrelevant) of the face. It was found that only older adults updated emotional faces more readily compared to neutral faces as evidenced by faster RTs on non-match trials. This emotion benefit was observed under low-load conditions (1-back task) but not under high-load conditions (2-back task) and only if emotion was task-relevant. In contrast, task-irrelevant emotion did not impair updating performance in either age group. These findings suggest that older adults can benefit from task-relevant emotional information to a greater extent than younger adults when sufficient cognitive resources are available. They also highlight that emotional processing can buffer age-related decline in WM tasks that require not only maintenance but also manipulation of material. PMID:28966602
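The 1-back and 2-back tasks described in this record require a match judgment against the stimulus presented n trials earlier. A minimal sketch of that scoring logic; the stimulus labels are hypothetical, not the face stimuli used in the study:

```python
def nback_match_trials(stream, n):
    """Return indices of trials whose stimulus matches the one n trials back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# Hypothetical identity-expression labels, for illustration only.
stream = ["A-happy", "B-angry", "B-angry", "A-happy", "B-angry"]
print(nback_match_trials(stream, 1))  # [2]  (1-back match at trial 2)
print(nback_match_trials(stream, 2))  # [4]  (2-back match at trial 4)
```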
Lambert, Bruno; Declerck, Carolyn H; Boone, Christophe
2014-02-01
Previous research on the relation between oxytocin and trustworthiness evaluations has yielded inconsistent results. The current study reports an experiment using artificial faces, which allows manipulation of the trustworthiness dimension without changing factors like emotion or face symmetry. We investigate whether (1) oxytocin increases the average trustworthiness evaluation of faces (level effect), and/or whether (2) oxytocin improves the discriminatory ability of trustworthiness perception, so that people become more accurate in distinguishing faces that vary along a gradient of trustworthiness. In a double-blind oxytocin/placebo experiment (N=106), participants completed two judgement tasks. First they evaluated the trustworthiness of a series of pictures of artificially generated faces, neutral in the trustworthiness dimension. Next they compared neutral faces with artificially generated faces that were manipulated to vary in trustworthiness. The results indicate that oxytocin (relative to a placebo) does not affect the evaluation of trustworthiness in the first task. However, in the second task, misclassification of untrustworthy faces as trustworthy occurred significantly less often in the oxytocin group. Furthermore, oxytocin improved discriminatory ability for untrustworthy, but not trustworthy, faces. We conclude that oxytocin does not increase trustworthiness judgments on average, but that it helps people recognize an untrustworthy face more accurately. Copyright © 2013 Elsevier Ltd. All rights reserved.
Childhood anxiety and attention to emotion faces in a modified stroop task.
Hadwin, Julie A; Donnelly, Nick; Richards, Anne; French, Christopher C; Patel, Umang
2009-06-01
This study used an emotional face Stroop task to investigate the effects of self-reported trait anxiety, social concern (SC), and chronological age (CA) on reaction time to match coloured outlines of angry, happy, and neutral faces (and control faces with scrambled features) with coloured buttons in a community sample of 74 children aged 6-12 years. The results showed interference in colour matching for angry (relative to neutral) faces in children with elevated SC. The same effect was not found for happy or control faces. In addition, the results suggest that selective attention to angry faces in children with elevated SC was not significantly moderated by age.
Face adaptation does not improve performance on search or discrimination tasks
Ng, Minna; Boynton, Geoffrey M.; Fine, Ione
2011-01-01
The face adaptation effect, as described by M. A. Webster and O. H. MacLin (1999), is a robust perceptual shift in the appearance of faces after a brief adaptation period. For example, prolonged exposure to Asian faces causes a Eurasian face to appear distinctly Caucasian. This adaptation effect has been documented for general configural effects, as well as for the facial properties of gender, ethnicity, expression, and identity. We began by replicating the finding that adaptation to ethnicity, gender, and a combination of both features induces selective shifts in category appearance. We then investigated whether this adaptation has perceptual consequences beyond a shift in the perceived category boundary by measuring the effects of adaptation on RSVP, spatial search, and discrimination tasks. Adaptation had no discernable effect on performance for any of these tasks. PMID:18318604
McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael
2010-04-01
Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.
Research of Face Recognition with Fisher Linear Discriminant
NASA Astrophysics Data System (ADS)
Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.
2018-01-01
Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult: human faces have diverse expressions and attribute changes such as eyeglasses, mustaches, and beards. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes, maximizing the distance between classes while minimizing within-class scatter so as to produce better classification.
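The FLD idea described in this abstract can be sketched in a few lines. This is an illustrative two-class version on synthetic data, not the authors' implementation; real face pipelines operate on flattened pixel vectors, usually PCA-reduced first so that the within-class scatter matrix is invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for vectorized face images: two classes ("person A" / "person B")
# drawn from Gaussians with different means.
n, d = 100, 5
class_a = rng.normal(loc=0.0, scale=1.0, size=(n, d))
class_b = rng.normal(loc=2.0, scale=1.0, size=(n, d))

def fisher_ld(x1, x2):
    """Two-class Fisher Linear Discriminant: w = S_w^{-1} (m1 - m2)."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatter)
    s_w = (x1 - m1).T @ (x1 - m1) + (x2 - m2).T @ (x2 - m2)
    w = np.linalg.solve(s_w, m1 - m2)
    return w / np.linalg.norm(w)

w = fisher_ld(class_a, class_b)
# Classify by projecting onto w and thresholding at the midpoint of the
# projected class means (class A projects above the midpoint here).
threshold = 0.5 * ((class_a @ w).mean() + (class_b @ w).mean())
pred_a = (class_a @ w) > threshold
pred_b = (class_b @ w) <= threshold
accuracy = (pred_a.sum() + pred_b.sum()) / (2 * n)
```

The discriminant direction maximizes between-class separation relative to within-class scatter, which is the property the abstract credits for better classification.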
Auditory temporal-order processing of vowel sequences by young and elderly listeners
Fogerty, Daniel; Humes, Larry E.; Kewley-Port, Diane
2010-01-01
This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18–31 years) and older (N=151; 60–88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners had more variability and performed poorer than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners’ SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences of temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age. PMID:20370033
Saddiki, Najat; Hennion, Sophie; Viard, Romain; Ramdane, Nassima; Lopes, Renaud; Baroncini, Marc; Szurhaj, William; Reyns, Nicolas; Pruvo, Jean Pierre; Delmaire, Christine
2018-05-01
Medial temporal lobe structures, and more specifically the hippocampus, play a decisive role in episodic memory. Most memory functional magnetic resonance imaging (fMRI) studies evaluate the encoding phase, the retrieval phase being performed outside the MRI. We aimed to determine whether greater hippocampal fMRI activations could be revealed during the retrieval phase. Thirty-five epileptic patients underwent a two-step memory fMRI. During the encoding phase, subjects were requested to identify the feminine or masculine gender of the faces and words presented, in order to encourage stimulus encoding. One hour later, during the retrieval phase, subjects had to recognize the words and faces. We used an event-related design to identify hippocampal activations. There was no significant difference between patients with left temporal lobe epilepsy, patients with right temporal lobe epilepsy, and patients with extratemporal lobe epilepsy on the verbal and visual learning tasks. For words, patients demonstrated significantly more bilateral hippocampal activation during the retrieval task than during the encoding task, and when the tasks were combined than during encoding alone. A significant difference was seen between face encoding alone and face retrieval alone. This study demonstrates the essential contribution of the retrieval task during an fMRI memory task; the number of patients with hippocampal activations was greatest when the two tasks were taken into account. Copyright © 2018. Published by Elsevier Masson SAS.
ERIC Educational Resources Information Center
Sydorenko, Tetyana V.
2011-01-01
Past research has uncovered ways to improve communicative competence, including task-based learner-learner interaction (e.g., R. Ellis, 2003) and task planning (e.g., Mochizuki and Ortega, 2008). Teacher-guided planning particularly increases the benefits of learner-learner interaction (Foster and Skehan, 1999). One component of communicative…
Sharvit, Keren; Brambilla, Marco; Babush, Maxim; Colucci, Francesco Paolo
2015-09-01
Four studies tested the proposition that regulation of collective guilt in the face of harmful ingroup behavior involves motivated reasoning. Cognitive energetics theory suggests that motivated reasoning is a function of goal importance, mental resource availability, and task demands. Accordingly, three studies conducted in the United States and Israel demonstrated that high importance of avoiding collective guilt, represented by group identification (Studies 1 and 3) and conservative ideological orientation (Study 2), is negatively related to collective guilt, but only when mental resources are not depleted by cognitive load. The fourth study, conducted in Italy, demonstrated that when justifications for the ingroup's harmful behavior are immediately available, the task of regulating collective guilt and shame becomes less demanding and less susceptible to resource depletion. By combining knowledge from the domains of motivated cognition, emotion regulation, and intergroup relations, these cross-cultural studies offer novel insights regarding factors underlying the regulation of collective guilt. © 2015 by the Society for Personality and Social Psychology, Inc.
Alrashdan, Abdalla; Momani, Amer; Ababneh, Tamador
2012-01-01
One of the most challenging problems facing healthcare providers is to determine the actual cost of their procedures, which is important for internal accounting and price justification to insurers. The objective of this paper is to find suitable categories to identify the diagnostic outpatient medical procedures and translate them from functional orientation to process orientation. A hierarchical task tree is developed based on a classification schema of procedural activities. Each procedure is seen as a process consisting of a number of activities. This provides a powerful foundation for activity-based cost/management implementation and enough information to discover the value-added and non-value-added activities that assist in process improvement and may eventually lead to cost reduction. Work measurement techniques are used to determine the standard time of each activity at the lowest level of the task tree. A real case study at a private hospital is presented to demonstrate the proposed methodology. © 2011 National Association for Healthcare Quality.
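The activity-based roll-up this abstract describes amounts to pricing each leaf activity of the task tree at its measured standard time and summing. A minimal sketch, with hypothetical activity names, times, and rates (none are from the paper):

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    minutes: float        # standard time from work measurement
    rate_per_min: float   # loaded resource cost per minute
    value_added: bool     # flagged during process analysis

# Leaf activities of one hypothetical outpatient procedure's task tree
procedure = [
    Activity("check-in", 4.0, 0.8, False),
    Activity("prep patient", 6.0, 1.2, True),
    Activity("imaging scan", 15.0, 3.5, True),
    Activity("wait for transport", 10.0, 0.8, False),
    Activity("report findings", 8.0, 2.0, True),
]

# Procedure cost = sum over activities of (standard time x rate)
total = sum(a.minutes * a.rate_per_min for a in procedure)
non_value_added = sum(a.minutes * a.rate_per_min
                      for a in procedure if not a.value_added)
share = non_value_added / total  # fraction of cost that is non-value-added
```

Separating the non-value-added share is what makes the activity view useful for the process-improvement step the paper mentions.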
Music to my ears: Age-related decline in musical and facial emotion recognition.
Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted
2017-12-01
We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse on each of the tasks, with specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in the three tasks (music emotion, face emotion, and face age) were not correlated with each other. General cognitive decline did not appear to explain our results, as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong
2017-01-01
Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces. PMID:28249033
Individual differences and the effect of face configuration information in the McGurk effect.
Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio
2018-04-01
The McGurk effect, which denotes the influence of visual information on audiovisual speech perception, is less frequently observed in individuals with autism spectrum disorder (ASD) compared to those without it; the reason for this remains unclear. Several studies have suggested that facial configuration context might play a role in this difference. More specifically, people with ASD show a local processing bias for faces-that is, they process global face information to a lesser extent. This study examined the role of facial configuration context in the McGurk effect in 46 healthy students. Adopting an analogue approach using the Autism-Spectrum Quotient (AQ), we sought to determine whether this facial configuration context is crucial to previously observed reductions in the McGurk effect in people with ASD. Lip-reading and audiovisual syllable identification tasks were assessed via presentation of upright normal, inverted normal, upright Thatcher-type, and inverted Thatcher-type faces. When the Thatcher-type face was presented, perceivers were found to be sensitive to the misoriented facial characteristics, causing them to perceive a weaker McGurk effect than when the normal face was presented (this is known as the McThatcher effect). Additionally, the McGurk effect was weaker in individuals with high AQ scores than in those with low AQ scores in the incongruent audiovisual condition, regardless of their ability to read lips or process facial configuration contexts. Our findings, therefore, do not support the assumption that individuals with ASD show a weaker McGurk effect due to a difficulty in processing facial configuration context.
Methodology of the determination of the uncertainties by using the biometric device the Broadway 3D
NASA Astrophysics Data System (ADS)
Jasek, Roman; Talandova, Hana; Adamek, Milan
2016-06-01
Biometric identification by face is among the most widely used methods of biometric identification. Because it provides faster and more accurate identification, it has been implemented in the security area. A 3D face reader by the manufacturer Broadway was used for the measurements. It is equipped with a 3D camera system, which uses structured-light scanning and saves the template as a 3D model of the face. The obtained data were evaluated with the Turnstile Enrolment Application (TEA) software. The measurements used the Broadway 3D face reader. First, the person was scanned and stored in the database. Thereafter, the person was compared with the template stored in the database for each method. Finally, a measure of reliability was evaluated for the Broadway 3D face reader.
Morriss, Jayne; McSorley, Eugene; van Reekum, Carien M
2017-08-24
Attentional bias to uncertain threat is associated with anxiety disorders. Here we examine the extent to which emotional face distractors (happy, angry, and neutral) and individual differences in intolerance of uncertainty (IU) impact saccades in two versions of the "follow a cross" task. In both versions, the probability of receiving an emotional face distractor was 66.7%. To increase perceived uncertainty regarding the location of the face distractors, in one of the tasks additional non-predictive cues were presented before the onset of the face distractors and target. IU did not impact saccades towards non-cued face distractors. However, IU, over and above trait anxiety, impacted saccades under non-predictive cueing of face distractors. Under these conditions, the eyes of high-IU individuals were pulled towards angry face distractors and away from happy face distractors overall, and the speed of this deviation was determined by the combination of the cue and the emotion of the face. Overall, these results suggest a specific role of IU in attentional bias to threat during uncertainty. These findings highlight the potential of intolerance-of-uncertainty-based mechanisms to help understand anxiety disorder pathology and inform potential treatment targets.
Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.
Marcel, Sébastien; Millán, José Del R
2007-04-01
In this paper, we investigate the use of brain activity for person authentication. Previous studies have shown that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic, and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area, and it has focused mainly on person identification rather than person authentication. Person authentication aims to accept or reject a person claiming an identity, i.e., comparing biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose a statistical framework based on Gaussian Mixture Models and Maximum A Posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that some mental tasks are more appropriate for person authentication than others.
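The GMM-plus-MAP framework mentioned above can be sketched as mean-only adaptation of a background model toward one enrollment session (Reynolds-style relevance MAP, as used in speaker verification). The 1-D data and parameters below are illustrative, not the paper's EEG setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Universal background model": a two-component 1-D GMM, standing in for a
# model trained on many users' EEG features.
ubm_means = np.array([-2.0, 2.0])
ubm_vars = np.array([1.0, 1.0])
ubm_weights = np.array([0.5, 0.5])

# One enrollment session from a claimed identity, shifted relative to the UBM.
x = np.concatenate([rng.normal(-1.5, 1.0, 50), rng.normal(2.5, 1.0, 50)])

def map_adapt_means(x, means, variances, weights, relevance=16.0):
    """MAP adaptation of GMM means toward the enrollment data."""
    # E-step: responsibility of each component for each sample
    ll = -0.5 * ((x[:, None] - means) ** 2 / variances
                 + np.log(2 * np.pi * variances))
    post = weights * np.exp(ll)
    post /= post.sum(axis=1, keepdims=True)
    # Sufficient statistics per component
    n_k = post.sum(axis=0)
    x_bar = (post * x[:, None]).sum(axis=0) / n_k
    # Interpolate between the data mean and the prior (UBM) mean;
    # the relevance factor controls how much one session can move the model.
    alpha = n_k / (n_k + relevance)
    return alpha * x_bar + (1 - alpha) * means

adapted = map_adapt_means(x, ubm_means, ubm_vars, ubm_weights)
```

At test time the authentication score is typically the log-likelihood ratio between the adapted model and the UBM; the relevance factor is what lets a single training session suffice without overfitting.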
Kar, Bhoomika Rastogi; Srinivasan, Narayanan; Nehabala, Yagyima; Nigam, Richa
2018-03-01
We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured in terms of the reduction in Stroop interference (the difference between incongruent and congruent trials) as a function of previous-trial emotion and previous-trial congruence. Reactive control effects were measured in terms of the reduction in Stroop interference as a function of current-trial emotion and previous-trial congruence. Negative emotions on the previous trial exerted a greater influence on proactive control than the positive emotion: sad faces on the previous trial resulted in a greater reduction in Stroop interference for happy faces on the current trial. However, current-trial angry faces showed stronger adaptation effects than happy faces. Thus, both proactive and reactive control mechanisms depend on the emotional valence of task-relevant stimuli.
Face recognition and description abilities in people with mild intellectual disabilities.
Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R; Hancock, Peter J B
2013-09-01
People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Two experiments examined face recognition and description abilities in people with mild intellectual disabilities (mID) and compared their performance with that of people without ID. Experiment 1 used three old/new face recognition tasks. Experiment 2 consisted of two face description tasks, during which participants had to describe faces verbally, both from memory and with the target in view. Participants with mID performed significantly more poorly on both recognition and recall tasks than control participants. However, their group performance was better than chance, and it varied depending on the measures introduced. The practical implications of these findings in forensic settings are discussed. © 2013 John Wiley & Sons Ltd.
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
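The similarity-rating approach in this abstract can be sketched with classical (Torgerson) MDS: embed the rated items in a low-dimensional space and predict that each vehicle will be confused with its nearest neighbour there. The dissimilarity matrix and vehicle labels below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical averaged dissimilarity ratings among four vehicles
# (0 = identical, larger = less similar); symmetric, zero diagonal.
labels = ["T-72", "T-80", "M1A2", "Leopard 2"]
d = np.array([
    [0.0, 1.0, 4.0, 4.2],
    [1.0, 0.0, 3.8, 4.0],
    [4.0, 3.8, 0.0, 1.2],
    [4.2, 4.0, 1.2, 0.0],
])

def classical_mds(dissim, k=2):
    """Torgerson's classical MDS: embed items so that Euclidean distances
    approximate the given dissimilarities."""
    n = dissim.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dissim ** 2) @ j             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]           # top-k eigenvalues
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

coords = classical_mds(d)

def most_confusable(i, coords):
    """Predicted confusion for item i: its nearest neighbour in MDS space."""
    dist = np.linalg.norm(coords - coords[i], axis=1)
    dist[i] = np.inf
    return int(np.argmin(dist))
```

With this toy matrix, the two Russian-pattern tanks land near each other in the recovered space, so each is predicted to be confused with the other, which is the kind of prediction the study validated against observed identification errors.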
Ursache, Alexandra; Raver, C. Cybele
2015-01-01
This study examines preadolescents' reports of risk-taking as predicted by two different, but related, inhibitory control systems: sensitivity to reward and loss on the one hand, and higher-order processing in the context of cognitive conflict, known as executive functioning (EF), on the other. Importantly, this study examines these processes in a sample of inner-city, low-income preadolescents, and as such examines the ways in which these processes may be related to risky behaviors as a function of children's levels of both concurrent and chronic exposure to household poverty. As part of a larger longitudinal study, 382 children (ages 9–11) provided a self-report of risky behaviors and participated in the Iowa Gambling Task, assessing bias for infrequent loss (preference for infrequent, high-magnitude versus frequent, low-magnitude loss), and the Hearts and Flowers task, assessing executive functioning. Results demonstrated that a higher bias for infrequent loss was associated with higher risky behaviors for children who demonstrated lower EF. Furthermore, bias for infrequent loss was most strongly associated with higher risk-taking for children facing the highest levels of poverty. Implications for early identification and prevention of risk-taking in inner-city preadolescents are discussed. PMID:26412918
Imhoff, Roland; Lange, Jens; Germar, Markus
2018-02-22
Spatial cueing paradigms are popular tools to assess human attention to emotional stimuli, but different variants of these paradigms differ in participants' primary task. In one variant, participants indicate the location of the target (location task), whereas in the other they indicate the shape of the target (identification task). In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies (total N = 397; two in the supplement) using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas in identification tasks the same effect rests on delay due to attention to the cue. Based on this, we predict and empirically confirm that a symmetrical distribution of valid and invalid cues reduces cue validity effects in location tasks to a greater extent than in identification tasks. Across all variants of the task, we failed to replicate the previously reported finding of greater cue validity effects for arousing (vs. neutral) stimuli. We discuss the implications of these findings for best practice in spatial cueing research.
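The distinction drawn above can be illustrated with a toy diffusion simulation: a start-point shift (response bias) and a longer non-decision time (delay from attending to the cue) both change mean RT, but through different model parameters. All parameter values here are arbitrary, chosen only to make the two mechanisms visible.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_rt(drift, start, threshold=1.0, ndt=0.3, dt=0.001, noise=1.0):
    """One diffusion-model trial: evidence starts at `start` and drifts
    toward the correct boundary at `threshold` (0 = error boundary).
    Returns total RT in seconds (decision time + non-decision time)."""
    x, t = start, 0.0
    while 0.0 < x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t

n = 500
# (1) Response bias: a valid cue shifts the starting point toward the
#     correct boundary (the account for location tasks).
rt_bias = np.mean([ddm_rt(drift=2.0, start=0.7) for _ in range(n)])
# (2) Attention to the cue: cue processing adds non-decision time on
#     invalid trials (the account for identification tasks).
rt_delay = np.mean([ddm_rt(drift=2.0, start=0.5, ndt=0.4) for _ in range(n)])
# Neutral baseline: unbiased start, default non-decision time.
rt_neutral = np.mean([ddm_rt(drift=2.0, start=0.5) for _ in range(n)])
```

Both manipulations move mean RT in the expected direction relative to the neutral baseline, which is why raw cue validity effects alone cannot distinguish the two processes and a diffusion-model decomposition is needed.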
Using eye movements as an index of implicit face recognition in autism spectrum disorder.
Hedley, Darren; Young, Robyn; Brewer, Neil
2012-10-01
Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
Face-induced expectancies influence neural mechanisms of performance monitoring.
Osinsky, Roman; Seeger, Jennifer; Mussel, Patrick; Hewig, Johannes
2016-04-01
In many daily situations, the consequences of our actions are predicted by cues that are often social in nature. For instance, seeing the face of an evaluator (e.g., a supervisor at work) may activate certain evaluative expectancies, depending on the history of prior encounters with that particular person. We investigated how such face-induced expectancies influence neurocognitive functions of performance monitoring. We recorded an electroencephalogram while participants completed a time-estimation task, during which they received performance feedback from a strict and a lenient evaluator. During each trial, participants first saw the evaluator's face before performing the task and, finally, receiving feedback. Therefore, faces could be used as predictive cues for the upcoming evaluation. We analyzed electrocortical signatures of performance monitoring at the stages of cue processing, task performance, and feedback reception. Our results indicate that, at the cue stage, seeing the strict evaluator's face results in an anticipatory preparation of fronto-medial monitoring mechanisms, as reflected by a sustained negative-going amplitude shift (i.e., the contingent negative variation). At the performance stage, face-induced expectancies of a strict evaluation rule led to increases of early performance monitoring signals (i.e., frontal-midline theta power). At the final stage of feedback reception, violations of outcome expectancies differentially affected the feedback-related negativity and frontal-midline theta power, pointing to a functional dissociation between these signatures. Altogether, our results indicate that evaluative expectancies induced by face-cues lead to adjustments of internal performance monitoring mechanisms at various stages of task processing.
Measures of skin conductance and heart rate in alcoholic men and women during memory performance
Poey, Alan; Ruiz, Susan Mosher; Marinkovic, Ksenija; Oscar-Berman, Marlene
2015-01-01
We examined abnormalities in physiological responses to emotional stimuli associated with long-term chronic alcoholism. Skin conductance responses (SCR) and heart rate (HR) responses were measured in 32 abstinent alcoholic (ALC) and 30 healthy nonalcoholic (NC) men and women undergoing an emotional memory task in an MRI scanner. The task required participants to remember the identity of two emotionally-valenced faces presented at the onset of each trial during functional magnetic resonance imaging (fMRI) scanning. After viewing the faces, participants saw a distractor image (an alcoholic beverage, nonalcoholic beverage, or scrambled image) followed by a single probe face. The task was to decide whether the probe face matched one of the two encoded faces. Skin conductance measurements (before and after the encoded faces, distractor, and probe) were obtained from electrodes on the index and middle fingers on the left hand. HR measurements (beats per minute before and after the encoded faces, distractor, and probe) were obtained by a pulse oximeter placed on the little finger on the left hand. We expected that, relative to NC participants, the ALC participants would show reduced SCR and HR responses to the face stimuli, and that we would identify greater reactivity to the alcoholic beverage stimuli than to the distractor stimuli unrelated to alcohol. While the beverage type did not differentiate the groups, the ALC group did have reduced skin conductance and HR responses to elements of the task, as compared to the NC group. PMID:26020002
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and proposed a novel approach to bridging the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Visual scanpath abnormalities in 22q11.2 deletion syndrome: is this a face specific deficit?
McCabe, Kathryn; Rich, Dominique; Loughland, Carmel Maree; Schall, Ulrich; Campbell, Linda Elisabet
2011-09-30
People with 22q11.2 deletion syndrome (22q11DS) have deficits in face emotion recognition. However, it is not known whether this is a deficit specific to faces, or represents maladaptive information processing strategies for complex stimuli in general. This study examined the specificity of face emotion processing deficits in 22q11DS by exploring recognition accuracy and visual scanpath performance on a Faces task compared to a Weather Scene task. Seventeen adolescents with 22q11DS (11 female, mean age 17.4) and 18 healthy controls (11 female, mean age 17.7) participated in the study. People with 22q11DS displayed an overall impoverished scanning strategy for face and weather stimuli alike, resulting in poorer accuracy across all stimuli for the 22q11DS participants compared to controls. While the control subjects altered their information processing in response to faces, a similar change was not present in the 22q11DS group, indicating that the groups used different visual scanpath strategies to identify the category within each task; faces appear to represent a particularly difficult subcategory. To conclude, while this study indicates that people with 22q11DS have a general visual processing deficit, the lack of strategic change between tasks suggests that the 22q11DS group did not adapt to the change in stimulus content as well as the controls, indicative of cognitive inflexibility rather than a face-specific deficit. Copyright © 2011 Elsevier Ltd. All rights reserved.
Processing deficits for familiar and novel faces in patients with left posterior fusiform lesions.
Roberts, Daniel J; Lambon Ralph, Matthew A; Kim, Esther; Tainturier, Marie-Josephe; Beeson, Pelagie M; Rapcsak, Steven Z; Woollams, Anna M
2015-11-01
Pure alexia (PA) arises from damage to the left posterior fusiform gyrus (pFG) and the striking reading disorder that defines this condition has meant that such patients are often cited as evidence for the specialisation of this region to processing of written words. There is, however, an alternative view that suggests this region is devoted to processing of high acuity foveal input, which is particularly salient for complex visual stimuli like letter strings. Previous reports have highlighted disrupted processing of non-linguistic visual stimuli after damage to the left pFG, both for familiar and unfamiliar objects and also for novel faces. This study explored the nature of face processing deficits in patients with left pFG damage. Identification of famous faces was found to be compromised in both expressive and receptive tasks. Discrimination of novel faces was also impaired, particularly for those that varied in terms of second-order spacing information, and this deficit was most apparent for the patients with the more severe reading deficits. Interestingly, discrimination of faces that varied in terms of feature identity was considerably better in these patients and it was performance in this condition that was related to the size of the length effects shown in reading. This finding complements functional imaging studies showing left pFG activation for faces varying only in spacing and frontal activation for faces varying only on features. These results suggest that the sequential part-based processing strategy that promotes the length effect in the reading of these patients also allows them to discriminate between faces on the basis of feature identity, but processing of second-order configural information is most compromised due to their left pFG lesion. This study supports a view in which the left pFG is specialised for processing of high acuity foveal visual information that supports processing of both words and faces. Copyright © 2015 The Authors. 
Published by Elsevier Ltd. All rights reserved.
Sandford, Adam; Burton, A Mike
2014-09-01
Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliar faces, and in two experiments there was no difference. These findings were not due to general task difficulty: participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.
Neural networks related to dysfunctional face processing in autism spectrum disorder
Nickl-Jockschat, Thomas; Rottschy, Claudia; Thommes, Johanna; Schneider, Frank; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
2016-01-01
One of the most consistent neuropsychological findings in autism spectrum disorders (ASD) is a reduced interest in and impaired processing of human faces. We conducted an activation likelihood estimation meta-analysis on 14 functional imaging studies on neural correlates of face processing enrolling a total of 164 ASD patients. Subsequently, normative whole-brain functional connectivity maps for the identified regions of significant convergence were computed for the task-independent (resting-state) and task-dependent (co-activations) state in healthy subjects. Quantitative functional decoding was performed by reference to the BrainMap database. Finally, we examined the overlap of the delineated network with the results of a previous meta-analysis on structural abnormalities in ASD as well as with brain regions involved in human action observation/imitation. We found a single cluster in the left fusiform gyrus showing significantly reduced activation during face processing in ASD across all studies. Both task-dependent and task-independent analyses indicated significant functional connectivity of this region with the temporo-occipital and lateral occipital cortex, the inferior frontal and parietal cortices, the thalamus and the amygdala. Quantitative reverse inference then indicated an association of these regions mainly with face processing, affective processing, and language-related tasks. Moreover, we found that the cortex in the region of right area V5 displaying structural changes in ASD patients showed consistent connectivity with the region showing aberrant responses in the context of face processing. Finally, this network was also implicated in the human action observation/imitation network. 
In summary, our findings thus suggest a functionally and structurally disturbed network of occipital regions related primarily to face (but potentially also language) processing, which interact with inferior frontal as well as limbic regions and may be the core of aberrant face processing and reduced interest in faces in ASD. PMID:24869925
Pehlivanoglu, Didem; Jain, Shivangi; Ariel, Robert; Verhaeghen, Paul
2014-01-01
In the present study, we investigated age-related differences in the processing of emotional stimuli. Specifically, we were interested in whether older adults would show deficits in unbinding emotional expression (i.e., either no emotion, happiness, anger, or disgust) from bound stimuli (i.e., photographs of faces expressing these emotions), as a hyper-binding account of age-related differences in working memory would predict. Younger and older adults completed different N-Back tasks (side-by-side 0-Back, 1-Back, 2-Back) under three conditions: match/mismatch judgments based on either the identity of the face (identity condition), the face's emotional expression (expression condition), or both identity and expression of the face (both condition). The two age groups performed more slowly and with lower accuracy in the expression condition than in the both condition, indicating the presence of an unbinding process. This unbinding effect was more pronounced in older adults than in younger adults, but only in the 2-Back task. Thus, older adults seemed to have a specific deficit in unbinding in working memory. Additionally, no age-related differences were found in accuracy in the 0-Back task, but such differences emerged in the 1-Back task, and were further magnified in the 2-Back task, indicating independent age-related differences in attention/STM and working memory. Pupil dilation data confirmed that the attention/STM version of the task (1-Back) is more effortful for older adults than younger adults.
Sauerland, Melanie; Wolfs, Andrea C F; Crans, Samantha; Verschuere, Bruno
2017-11-27
Direct eyewitness identification is widely used, but prone to error. We tested the validity of indirect eyewitness identification decisions using the reaction time-based concealed information test (CIT) for assessing cooperative eyewitnesses' face memory as an alternative to traditional lineup procedures. In a series of five experiments, a total of 401 mock eyewitnesses watched one of 11 different stimulus events that depicted a breach of law. Eyewitness identifications in the CIT were derived from longer reaction times to the previously encountered face as compared to well-matched foil faces. Across the five experiments, the weighted mean effect size d was 0.14 (95% CI 0.08-0.19). The reaction time-based CIT seems unsuited for testing cooperative eyewitnesses' memory for faces. The careful matching of the faces required for a fair lineup or the lack of intent to deceive may have hampered the diagnosticity of the reaction time-based CIT.
On the neural control of social emotional behavior
Roelofs, Karin; Minelli, Alessandra; Mars, Rogier B.; van Peer, Jacobien; Toni, Ivan
2009-01-01
It is known that the orbitofrontal cortex (OFC) is crucially involved in emotion regulation. However, the specific role of the OFC in controlling the behavior evoked by these emotions, such as approach–avoidance (AA) responses, remains largely unexplored. We measured behavioral and neural responses (using fMRI) during the performance of a social task, a reaction time (RT) task where subjects approached or avoided visually presented emotional faces by pulling or pushing a joystick, respectively. RTs were longer for affect-incongruent responses (approach angry faces and avoid happy faces) as compared to affect-congruent responses (approach–happy; avoid–angry). Moreover, affect-incongruent responses recruited increased activity in the left lateral OFC. These behavioral and neural effects emerged only when the subjects responded explicitly to the emotional value of the faces (AA-task) and largely disappeared when subjects responded to an affectively irrelevant feature of the faces during a control (gender evaluation: GE) task. Most crucially, the size of the OFC-effect correlated positively with the size of the behavioral costs of approaching angry faces. These findings qualify the role of the lateral OFC in the voluntary control of social–motivational behavior, emphasizing the relevance of this region for selecting rule-driven stimulus–response associations, while overriding automatic (affect-congruent) stimulus–response mappings. PMID:19047074
Face and body perception in schizophrenia: a configural processing deficit?
Soria Bauser, Denise; Thoma, Patrizia; Aizenberg, Victoria; Brüne, Martin; Juckel, Georg; Daum, Irene
2012-01-30
Face and body perception rely on common processing mechanisms and activate similar but not identical brain networks. Patients with schizophrenia show impaired face perception, and the present study addressed for the first time body perception in this group. Seventeen patients diagnosed with schizophrenia or schizoaffective disorder were compared to 17 healthy controls on standardized tests assessing basic face perception skills (identity discrimination, memory for faces, recognition of facial affect). A matching-to-sample task including emotional and neutral faces, bodies and cars either in an upright or in an inverted position was administered to assess potential category-specific performance deficits and impairments of configural processing. Relative to healthy controls, schizophrenia patients showed poorer performance on the tasks assessing face perception skills. In the matching-to-sample task, they also responded more slowly and less accurately than controls, regardless of the stimulus category. Accuracy analysis showed significant inversion effects for faces and bodies across groups, reflecting configural processing mechanisms; however reaction time analysis indicated evidence of reduced inversion effects regardless of category in schizophrenia patients. The magnitude of the inversion effects was not related to clinical symptoms. Overall, the data point towards reduced configural processing, not only for faces but also for bodies and cars in individuals with schizophrenia. © 2011 Elsevier Ltd. All rights reserved.
Building face composites can harm lineup identification performance.
Wells, Gary L; Charman, Steve D; Olson, Elizabeth A
2005-09-01
Face composite programs permit eyewitnesses to build likenesses of target faces by selecting facial features and combining them into an intact face. Research has shown that these composites are generally poor likenesses of the target face. Two experiments tested the proposition that this composite-building process could harm the builder's memory for the face. In Experiment 1 (n = 150), the authors used 50 different faces and found that building a composite reduced the chances that the person could later identify the original face from a lineup when compared with no-composite control conditions or with yoked composite-exposure control conditions. In Experiment 2 (n = 200), the authors found that this effect generalized to a simulated-crime video, but mistaken identifications from target-absent lineups were not inflated by composite building. Copyright 2005 APA, all rights reserved.
Attractive faces temporally modulate visual attention
Nakamura, Koyo; Kawabata, Hideaki
2014-01-01
Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were presented in succession for 160 ms each, and participants were asked to identify the two female faces embedded among the series of male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive, at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994
Unaware person recognition from the body when face identification fails.
Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J
2013-11-01
How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.
Gender differences in recognition of toy faces suggest a contribution of experience.
Ryan, Kaitlin F; Gauthier, Isabel
2016-12-01
When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally, either due to evolutionary reasons or the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Still Not Enough Time in the Day: Media Specialists, Program Planning and Time Management, Part II
ERIC Educational Resources Information Center
Fitzgerald, Mary Ann; Waldrip, Andrea
2004-01-01
The ways in which one can keep the library media plan on track in the face of the realistic challenges faced by media specialists are discussed. A system is proposed for prioritizing tasks, both planned goal-related tasks and tasks that walk in the door with varying degrees of urgency, along with further time management strategies. [For Part I, see…
Heenan, Adam; Troje, Nikolaus F
2014-01-01
Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. 
Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli.
Preliminary Face and Construct Validation Study of a Virtual Basic Laparoscopic Skill Trainer
Sankaranarayanan, Ganesh; Lin, Henry; Arikatla, Venkata S.; Mulcare, Maureen; Zhang, Likun; Derevianko, Alexandre; Lim, Robert; Fobert, David; Cao, Caroline; Schwaitzberg, Steven D.; Jones, Daniel B.
2010-01-01
Abstract Background The Virtual Basic Laparoscopic Skill Trainer (VBLaST™) is a developing virtual-reality–based surgical skill training system that incorporates several of the tasks of the Fundamentals of Laparoscopic Surgery (FLS) training system. This study aimed to evaluate the face and construct validity of the VBLaST™ system. Materials and Methods Thirty-nine subjects were voluntarily recruited at the Beth Israel Deaconess Medical Center (Boston, MA) and classified into two groups: experts (PGY 5, fellows, and practicing surgeons) and novices (PGY 1–4). They were then asked to perform three FLS tasks, consisting of peg transfer, pattern cutting, and endoloop, on both the VBLaST and FLS systems. The VBLaST performance scores were computed automatically, while the FLS scores were rated by a trained evaluator. Face validity was assessed using a 5-point Likert scale, varying from not realistic/useful (1) to very realistic/useful (5). Results Face-validity scores showed that the VBLaST system was rated significantly realistic in portraying the three FLS tasks (3.95 ± 0.909), as well as in the realism of trocar placement and tool movements (3.67 ± 0.874). Construct-validity results showed that VBLaST was able to differentiate between the expert and novice groups (P = 0.015). However, of the two tasks used for evaluating VBLaST, only the peg-transfer task showed a significant difference between the expert and novice groups (P = 0.003). Spearman correlation coefficient analysis between the two scores showed a significant correlation for the peg-transfer task (Spearman coefficient 0.364; P = 0.023). Conclusions VBLaST demonstrated significant face and construct validity. A further set of studies, involving improvements to the current VBLaST system, is needed to thoroughly demonstrate face and construct validity for all the tasks. PMID:20201683
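The construct-validity analysis above correlates the automatically computed VBLaST scores with the rater-assigned FLS scores using Spearman's coefficient. As a rough, stdlib-only sketch of that statistic (the scores below are invented for illustration, not the study's data):

```python
def average_ranks(values):
    """Rank values 1..n, giving tied values the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: VBLaST vs. FLS peg-transfer scores for five subjects.
vblast = [62, 75, 48, 90, 81]
fls = [58, 80, 50, 88, 77]
rho = spearman_rho(vblast, fls)  # ≈ 0.9 for these made-up scores
```

The average-rank convention for ties matches the standard definition of the statistic (and what `scipy.stats.spearmanr` computes).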
High confidence in falsely recognizing prototypical faces.
Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen
2018-06-01
We applied a metacognitive approach to investigate confidence in the recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence–accuracy relationship in this face-recognition paradigm was positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.
Stepanova, Elena V; Strube, Michael J
2012-01-01
Participants (N = 106) performed an affective priming task with facial primes that varied in skin tone and facial physiognomy and that were presented either in color or in grayscale. Participants' racial evaluations were more positive for Eurocentric than for Afrocentric physiognomy faces. Light skin tone faces were evaluated more positively than dark skin tone faces, but the magnitude of this effect depended on the mode of color presentation. The results suggest that in affective priming tasks, faces might not be processed holistically; instead, visual features of facial priming stimuli independently affect implicit evaluations.
Morita, Tomoyo; Saito, Daisuke N; Ban, Midori; Shimada, Koji; Okamoto, Yuko; Kosaka, Hirotaka; Okazawa, Hidehiko; Asada, Minoru; Naito, Eiichi
2017-04-21
Proprioception is a somatic sensation that allows us to sense and recognize the position and posture of our body parts, and changes therein. It pertains directly to oneself and may contribute to bodily awareness. Likewise, one's face is a symbol of oneself, so that visual self-face recognition directly contributes to the awareness of self as distinct from others. Recently, we showed that right-hemispheric dominant activity in the inferior fronto-parietal cortices, which are connected by the inferior branch of the superior longitudinal fasciculus (SLF III), is associated with proprioceptive illusion (awareness), in concert with sensorimotor activity. Herein, we tested the hypothesis that visual self-face recognition shares brain regions active during proprioceptive illusion in the right inferior fronto-parietal SLF III network. We scanned brain activity using functional magnetic resonance imaging while twenty-two right-handed healthy adults performed two tasks. One was a proprioceptive illusion task, in which blindfolded participants experienced a proprioceptive illusion of right hand movement. The other was a visual self-face recognition task, in which the participants judged whether an observed face was their own. We examined whether self-face recognition and proprioceptive illusion commonly activated the inferior fronto-parietal cortices connected by the SLF III in a right-hemispheric dominant manner. Despite the difference in sensory modality and in the body parts involved in the two tasks, both tasks activated the right inferior fronto-parietal cortices, which are likely connected by the SLF III, in a right-side dominant manner. Here we discuss possible roles for right inferior fronto-parietal activity in bodily awareness and self-awareness. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Lagattuta, Kristin Hansen; Kramer, Hannah J.
2017-01-01
We used eye tracking to examine 4- to 10-year-olds’ and adults’ (N = 173) visual attention to negative (anger, fear, sadness, disgust) and neutral faces when paired with happy faces in 2 experimental conditions: free-viewing (“look at the faces”) and directed (“look only at the happy faces”). Regardless of instruction, all age groups more often looked first to negative versus positive faces (no age differences), suggesting that initial orienting is driven by bottom-up processes. In contrast, biases in more sustained attention—last looks and looking duration—varied by age and could be modified by top-down instruction. On the free-viewing task, all age groups exhibited a negativity bias which attenuated with age and remained stable across trials. When told to look only at happy faces (directed task), all age groups shifted to a positivity bias, with linear age-related improvements. This ability to implement the “look only at the happy faces” instruction, however, fatigued over time, with the decrement stronger for children. Controlling for age, individual differences in executive function (working memory and inhibitory control) had no relation to the free-viewing task; however, these variables explained substantial variance on the directed task, with children and adults higher in executive function showing better skill at looking last and looking longer at happy faces. Greater anxiety predicted more first looks to angry faces on the directed task. These findings advance theory and research on normative development and individual differences in the bias to prioritize negative information, including contributions of bottom-up salience and top-down control. PMID:28054815
ERIC Educational Resources Information Center
Abrams, Alvin J.; Cook, Richard L.
In training people to perform auditory identification tasks (e.g., training students to identify sound characteristics in a sonar classification task), it is important to know whether training procedures merely sustain performance during training or whether they enhance learning of the task. Often an incorrect assumption is made that…
Informal Face-to-Face Interaction Improves Mood State Reflected in Prefrontal Cortex Activity
Watanabe, Jun-ichiro; Atsumori, Hirokazu; Kiguchi, Masashi
2016-01-01
Recent progress with wearable sensors has enabled researchers to capture face-to-face interactions quantitatively and has given great insight into human dynamics. One attractive field for applying such sensors is the workplace, where the relationship between the face-to-face behaviors of employees and the productivity of the organization has been investigated. One interesting result of previous studies showed that informal face-to-face interaction among employees, captured by wearable sensors that the employees wore, significantly affects their performance. However, the mechanism behind this relationship has not yet been adequately explained, though experiences at the job scene might qualitatively support the finding. We hypothesized that informal face-to-face interaction improves mood state, which in turn affects task performance. To test this hypothesis, we evaluated the change of mood state before and after break time for two groups of participants, one that spent their breaks alone and one that spent them with other participants, by administering questionnaires and taking brain activity measurements. Recent neuroimaging studies have suggested a significant relationship between mood state and brain activity. Here, we show that face-to-face interaction during breaks significantly improved mood state, as measured by the Profile of Mood States (POMS). We also observed that the verbal working memory (WM) task performance of participants who did not have face-to-face interaction during breaks decreased significantly. In this paper, we discuss how the change of mood state was reflected in prefrontal cortex (PFC) activity during WM tasks, as measured by near-infrared spectroscopy (NIRS). PMID:27199715
Shyi, Gary C-W; Wang, Chao-Chih
2016-01-01
The composite face task is one of the most popular research paradigms for measuring holistic processing of upright faces. The exact mechanism underlying holistic processing remains elusive and controversial, and some studies have suggested that holistic processing may not be evenly distributed, in that the top-half of a face might induce stronger holistic processing than its bottom-half counterpart. In two experiments, we further examined the possibility of asymmetric holistic processing. Prior to Experiment 1, we confirmed that perceptual discriminability was equated between top and bottom face halves; we found no differences in performance between top and bottom face halves when they were presented individually. Then, in Experiment 1, using the composite face task with the complete design to reduce response bias, we failed to obtain evidence that would support the notion of asymmetric holistic processing between top and bottom face halves. To further reduce performance variability and to remove lingering holistic effects observed in the misaligned condition in Experiment 1, we doubled the number of trials and increased misalignment between top and bottom face halves to make misalignment more salient in Experiment 2. Even with these additional manipulations, we were unable to find evidence indicative of asymmetric holistic processing. Taken together, these findings suggest that holistic processing is distributed homogenously within an upright face.
Hornung, Jonas; Kogler, Lydia; Wolpert, Stephan; Freiherr, Jessica; Derntl, Birgit
2017-01-01
The androgen derivative androstadienone is a substance found in human sweat and thus is a putative human chemosignal. Androstadienone has been studied with respect to effects on mood states, attractiveness ratings, physiological and neural activation. With the current experiment, we aimed to explore in which way androstadienone affects attention to social cues (human faces). Moreover, we wanted to test whether effects depend on specific emotions, the participants' sex and individual sensitivity to smell androstadienone. To do so, we investigated 56 healthy individuals (thereof 29 females taking oral contraceptives) with two attention tasks on two consecutive days (once under androstadienone, once under placebo exposure in pseudorandomized order). With an emotional dot-probe task we measured visuo-spatial cueing while an emotional Stroop task allowed us to investigate interference control. Our results suggest that androstadienone acts in a sex, task and emotion-specific manner as a reduction in interference processes in the emotional Stroop task was only apparent for angry faces in men under androstadienone exposure. More specifically, men showed a smaller difference in reaction times for congruent compared to incongruent trials. At the same time also women were slightly affected by smelling androstadienone as they classified angry faces more often correctly under androstadienone. For the emotional dot-probe task no modulation by androstadienone was observed. Furthermore, in both attention paradigms individual sensitivity to androstadienone was neither correlated with reaction times nor error rates in men and women. To conclude, exposure to androstadienone seems to potentiate the relevance of angry faces in both men and women in connection with interference control, while processes of visuo-spatial cueing remain unaffected. PMID:28369152
Identification of cutting force coefficients in machining process considering cutter vibration
NASA Astrophysics Data System (ADS)
Yao, Qi; Luo, Ming; Zhang, Dinghua; Wu, Baohai
2018-03-01
Among current cutting force models, cutting force coefficients remain the foundation of predictive calculation, in combination with consideration of geometric engagement variation, equipment characteristics, material properties, and so on. Despite their unquestionable significance, traditional and even some novel methods for identifying cutting force coefficients still face difficulties, including repetitive, onerous work, overly idealized measuring conditions, variation in coefficient values due to material divergence, and interference from measuring units. To utilize the large amount of data from real manufacturing, enlarge data sources, and enrich the cutting database for the prediction task, a novel identification method is proposed in this paper that considers the stiffness properties of the cutter-holder-spindle system. According to previous studies, the direct result of cutter vibration is a dynamic undeformed chip thickness. This fluctuation is considered in two stages of this investigation. First, a cutting force model incorporating cutter vibration is established in detail. Then, on the foundation of this model, a novel identification method is developed in which the dynamic undeformed chip thickness can be obtained from collected data. In a carefully designed experimental procedure, the reliability of the model is validated by comparing predicted and measured results. Data were collected under different cutting conditions and cutter stiffnesses to justify the identification method. The results showed that the divergence in the calculated coefficients is acceptable, confirming that the new method can accomplish its targets. In the discussion, potential directions for improvement are proposed.
Relational-Cultural Theory for Middle School Counselors
ERIC Educational Resources Information Center
Tucker, Catherine; Smith-Adcock, Sondra; Trepal, Heather C.
2011-01-01
Young adolescents (ages 11-14), typically in the middle school grades, face life tasks involving connections and belonging with their peer group along with the development of their individual identity (Henderson & Thompson, 2010). Learning to negotiate through these developmental tasks, they face myriad relational challenges. This article explores…
Priming Effects for Affective vs. Neutral Faces
ERIC Educational Resources Information Center
Burton, Leslie A.; Rabin, Laura; Wyatt, Gwinne; Frohlich, Jonathan; Vardy, Susan B.; Dimitri, Diana
2005-01-01
Affective and Neutral Tasks (faces with negative or neutral content, with different lighting and orientation) requiring reaction time judgments of poser identity were administered to 32 participants. Speed and accuracy were better for the Affective than Neutral Task, consistent with literature suggesting facilitation of performance by affective…
Summary statistics in the attentional blink.
McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M
2017-01-01
We used the attentional blink (AB) paradigm to investigate the processing stage at which extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1) and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced a similar magnitude AB in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and a condition in which participants were instructed to focus on only a single face within an ensemble. Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.
Gao, Zaifeng; Flevaris, Anastasia V; Robertson, Lynn C; Bentin, Shlomo
2011-07-01
We used the composite-face illusion and Navon stimuli to determine the consequences of priming local or global processing on subsequent face recognition. The composite-face illusion reflects the difficulty of ignoring the task-irrelevant half-face while attending the task-relevant half if the half-faces in the composite are aligned. On each trial, participants first matched two Navon stimuli, attending to either the global or the local level, and then matched the upper halves of two composite faces presented sequentially. Global processing of Navon stimuli increased the sensitivity to incongruence between the upper and the lower halves of the composite face, relative to a baseline in which the composite faces were not primed. Local processing of Navon stimuli did not influence the sensitivity to incongruence. Although incongruence induced a bias toward different responses, this bias was not modulated by priming. We conclude that global processing of Navon stimuli augments holistic processing of the face.
Matheson, H E; Bilsbury, T G; McMullen, P A
2012-03-01
A large body of research suggests that faces are processed by a specialized mechanism within the human visual system. This specialized mechanism is made up of subprocesses (Maurer, LeGrand, & Mondloch, 2002). One subprocess, called second- order relational processing, analyzes the metric distances between face parts. Importantly, it is well established that other-race faces and contrast-reversed faces are associated with impaired performance on numerous face processing tasks. Here, we investigated the specificity of second-order relational processing by testing how this process is applied to faces of different race and photographic contrast. Participants completed a feature displacement discrimination task, directly measuring the sensitivity to second-order relations between face parts. Across three experiments we show that, despite absolute differences in sensitivity in some conditions, inversion impaired performance in all conditions. The presence of robust inversion effects for all faces suggests that second-order relational processing can be applied to faces of different race and photographic contrast.
Face-n-Food: Gender Differences in Tuning to Faces.
Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N
2015-01-01
Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the least to the most resembling a face. Females not only recognized the images as a face more readily (reporting a face in images in which males still did not see one), but also gave more face responses overall. The findings are discussed in light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental, and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177
Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.
Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon
2006-06-01
Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown angry faces 'pop-out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the effect was reduced in the HFA/AS group when using widely varying crowd sizes and when faces were inverted, suggesting a difference in face-processing style may be evident even with simple schematic faces. We conclude there are intact threat detection mechanisms in AS, under simple and predictable conditions, but that like other face-perception tasks, the visual search of threat faces task reveals atypical face-processing in HFA/AS.
ERIC Educational Resources Information Center
Seymour, Karen E.; Pescosolido, Matthew F.; Reidy, Brooke L.; Galvan, Thania; Kim, Kerri L.; Young, Matthew; Dickstein, Daniel P.
2013-01-01
Objective: Bipolar disorder (BD) and attention-deficit/hyperactivity disorder (ADHD) are often comorbid or confounded; therefore, we evaluated emotional face identification to better understand brain/behavior interactions in children and adolescents with either primary BD, primary ADHD, or typically developing controls (TDC). Method: Participants…
Wang, Jinzhao; Kitayama, Shinobu; Han, Shihui
2011-08-23
Behavioral studies suggest that men are more likely to develop independent self-construals whereas women are more likely to develop interdependent self-construals. The gender difference in self-construals leads to two predictions. First, independent self-construals may result in a bias of attentional processing of self-related information that is stronger in men than in women. Second, interdependent self-construals may induce greater sensitivity to contextual information from the environment in women than in men. The present study tested these hypotheses by recording event-related potentials (ERPs) to familiar faces (self-, mother-, and father-faces) and unfamiliar faces (gender/age-matched strangers' faces) from 14 male and 14 female adults. Using an oddball paradigm, in separate blocks of trials, familiar faces were designated either as targets that required behavioral responses or as non-targets that did not require a behavioral response. We found that a long-latency positivity at 420-620 ms over the parietal area (the attention-sensitive target P3) showed a larger amplitude to the self-face than to mother-/father-faces that were designated as targets in men but not in women. In contrast, a long-latency positivity at 430-530 ms over the central area (the context-sensitive novelty P3) was enlarged for familiar compared to strangers' faces that were designated as non-targets, and this effect was greater in women than in men. Our results provide ERP evidence for sex differences in the processing of task-relevant and task-irrelevant social information. Copyright © 2011 Elsevier B.V. All rights reserved.
Expertise with unfamiliar objects is flexible to changes in task but not changes in class
Tangen, Jason M.
2017-01-01
Perceptual expertise is notoriously specific and bound by familiarity; generalizing to novel or unfamiliar images, objects, identities, and categories often comes at some cost to performance. In forensic and security settings, however, examiners are faced with the task of discriminating unfamiliar images of unfamiliar objects within their general domain of expertise (e.g., fingerprints, faces, or firearms). The job of a fingerprint expert, for instance, is to decide whether two unfamiliar fingerprint images were left by the same unfamiliar finger (e.g., Smith’s left thumb), or two different unfamiliar fingers (e.g., Smith and Jones’s left thumb). Little is known about the limits of this kind of perceptual expertise. Here, we examine fingerprint experts’ and novices’ ability to distinguish fingerprints compared to inverted faces in two different tasks. Inverted face images serve as an ideal comparison because they vary naturally between and within identities, as do fingerprints, and people tend to be less accurate or more novice-like at distinguishing faces when they are presented in an inverted or unfamiliar orientation. In Experiment 1, fingerprint experts outperformed novices in locating categorical fingerprint outliers (i.e., a loop pattern in an array of whorls), but not inverted face outliers (i.e., an inverted male face in an array of inverted female faces). In Experiment 2, fingerprint experts were more accurate than novices at discriminating matching and mismatching fingerprints that were presented very briefly, but not so for inverted faces. Our data show that perceptual expertise with fingerprints can be flexible to changing task demands, but there can also be abrupt limits: fingerprint expertise did not generalize to an unfamiliar class of stimuli. We interpret these findings as evidence that perceptual expertise with unfamiliar objects is highly constrained by one’s experience. PMID:28574998
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques to resolve this categorization-versus-detection discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random-effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy-face detection advantage was restricted to photographic faces, whereas a clear angry-face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
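The pooling described above (computing effect sizes on the r metric and estimating population effects with a random-effects model) follows a standard recipe: Fisher z-transform each correlation, weight by inverse variance plus a between-study variance tau², then back-transform. The sketch below uses the common DerSimonian-Laird estimator with invented effect sizes; the paper's exact estimator and data are not stated here, so both are assumptions:

```python
import math

def random_effects_pooled_r(rs, ns):
    """Pool correlations via Fisher z with a DerSimonian-Laird random-effects model."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z-transform
    vs = [1.0 / (n - 3) for n in ns]                      # var(z) ≈ 1/(n - 3)
    w = [1.0 / v for v in vs]
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    # Between-study heterogeneity tau^2 (DerSimonian-Laird, truncated at 0).
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)
    # Random-effects weights add tau^2 to each within-study variance.
    w_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_re)  # back-transform z to the r metric

# Invented studies: correlations and sample sizes, as in a recognition-speed analysis.
pooled = random_effects_pooled_r([0.30, 0.45, 0.25], [40, 120, 60])
```

The pooled estimate always lies between the smallest and largest input correlation; larger heterogeneity (tau²) pulls the weights toward equality, reducing the dominance of large studies.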
Phasic alertness enhances processing of face and non-face stimuli in congenital prosopagnosia.
Tanzer, Michal; Weinbach, Noam; Mardo, Elite; Henik, Avishai; Avidan, Galia
2016-08-01
Congenital prosopagnosia (CP) is a severe face processing impairment that occurs in the absence of any obvious brain damage and has often been associated with a more general deficit in deriving holistic relations between facial features or even between non-face shape dimensions. Here we further characterized this deficit and examined a potential way to ameliorate it. To this end we manipulated phasic alertness using alerting cues previously shown to modulate attention and enhance global processing of visual stimuli in normal observers. Specifically, we first examined whether individuals with CP, similarly to controls, would show greater global processing when exposed to an alerting cue in the context of a non-facial task (Navon global/local task). We then explored the effect of an alerting cue on face processing (upright/inverted face discrimination). Confirming previous findings, in the absence of alerting cues, controls showed a typical global bias in the Navon task and an inversion effect indexing holistic processing in the upright/inverted task, whereas individuals with CP failed to show these effects. Critically, when alerting cues preceded the experimental trials, both groups showed enhanced global interference and a larger inversion effect. These results suggest that phasic alertness may modulate visual processing and consequently, affect global/holistic perception. Hence, these findings further reinforce the notion that global/holistic processing may serve as a possible mechanism underlying the face processing deficit in CP. Moreover, they imply a possible route for enhancing face processing in individuals with CP and thus shed new light on potential amelioration of this disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mothersill, Omar; Morris, Derek W; Kelly, Sinead; Rose, Emma Jane; Bokde, Arun; Reilly, Richard; Gill, Michael; Corvin, Aiden P; Donohoe, Gary
2014-08-01
Processing the emotional content of faces is recognised as a key deficit of schizophrenia, associated with poorer functional outcomes and possibly contributing to the severity of clinical symptoms such as paranoia. At the neural level, fMRI studies have reported altered limbic activity in response to facial stimuli. However, previous studies may be limited by the use of cognitively demanding tasks and static facial stimuli. To address these issues, the current study used a face processing task involving both passive face viewing and dynamic social stimuli. Such a task may (1) lack the potentially confounding effects of high cognitive demands and (2) show higher ecological validity. Functional MRI was used to examine neural activity in 25 patients with a DSM-IV diagnosis of schizophrenia/schizoaffective disorder and 21 age- and gender-matched healthy controls while they participated in a face processing task, which involved viewing videos of angry and neutral facial expressions, and a non-biological baseline condition. While viewing faces, patients showed significantly weaker deactivation of the medial prefrontal cortex, including the anterior cingulate, and decreased activation in the left cerebellum, compared to controls. Patients also showed weaker medial prefrontal deactivation while viewing the angry faces relative to baseline. Given that the anterior cingulate plays a role in processing negative emotion, weaker deactivation of this region in patients while viewing faces may contribute to an increased perception of social threat. Future studies examining the neurobiology of social cognition in schizophrenia using fMRI may help establish targets for treatment interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
Artificial faces are harder to remember
Balas, Benjamin; Pacella, Jonathan
2015-01-01
Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852
Semantic Learning Modifies Perceptual Face Processing
ERIC Educational Resources Information Center
Heisz, Jennifer J.; Shedden, Judith M.
2009-01-01
Face processing changes when a face is learned with personally relevant information. In a five-day learning paradigm, faces were presented with rich semantic stories that conveyed personal information about the faces. Event-related potentials were recorded before and after learning during a passive viewing task. When faces were novel, we observed…
Cameron, E Leslie; Tai, Joanna C; Eckstein, Miguel P; Carrasco, Marisa
2004-01-01
Adding distracters to a display impairs performance on visual tasks (i.e. the set-size effect). While keeping the display characteristics constant, we investigated this effect in three tasks: 2-target identification, yes-no detection with 2 targets, and 8-alternative localization. A Signal Detection Theory (SDT) model, tailored for each task, accounts for the set-size effects observed in identification and localization tasks, and slightly under-predicts the set-size effect in a detection task. Given that sensitivity varies as a function of spatial frequency (SF), we measured performance in each of these three tasks in neutral and peripheral precue conditions for each of six spatial frequencies (0.5-12 cpd). For all spatial frequencies tested, performance on the three tasks decreased as set size increased in the neutral precue condition, and the peripheral precue reduced the effect. Larger set-size effects were observed at low SFs in the identification and localization tasks. This effect can be described using the SDT model, but was not predicted by it. For each of these tasks we also established the extent to which covert attention modulates performance across a range of set sizes. A peripheral precue substantially diminished the set-size effect and improved performance, even at set size 1. These results provide support for distracter exclusion, and suggest that signal enhancement may also be a mechanism by which covert attention can impose its effect.
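The SDT account of the set-size effect in the localization task can be illustrated with a Monte Carlo sketch: the observer picks the location with the largest internal response, so accuracy falls as more distractor locations are added. This is our own illustrative simulation (the paper fits its SDT model analytically per task); the d' value and set sizes below are assumptions.

```python
import random

def sdt_localization_accuracy(d_prime, set_size, trials=20000, seed=1):
    """Monte Carlo max-rule SDT model of M-alternative localization.

    The target location draws a response from N(d', 1); each of the
    set_size - 1 distractor locations draws from N(0, 1). The observer
    is correct when the target response exceeds every distractor response.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        target = rng.gauss(d_prime, 1.0)
        distractors = [rng.gauss(0.0, 1.0) for _ in range(set_size - 1)]
        if target > max(distractors, default=float("-inf")):
            correct += 1
    return correct / trials
```

With a fixed d', accuracy at set size 1 is trivially perfect and declines monotonically as distractor locations are added, which is the qualitative set-size effect the model captures.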
Repetition Blindness for Faces: A Comparison of Face Identity, Expression, and Gender Judgments.
Murphy, Karen; Ward, Zoe
2017-01-01
Repetition blindness (RB) refers to the impairment in reporting two identical targets within a rapid serial visual presentation stream. While numerous studies have demonstrated RB for words and pictures of objects, very few studies have examined RB for faces. This study extended this research by examining RB when the two faces were complete repeats (same emotion and identity), identity repeats (same individual, different emotion), and emotion repeats (different individual, same emotion) for identity, gender, and expression judgment tasks. Complete RB and identity RB effects were evident for all three judgment tasks. Emotion RB was only evident for the expression and gender judgments. Complete RB effects were larger than emotion or identity RB effects across all judgment tasks. For the expression judgments, there was more emotion than identity RB. The identity RB effect was larger than the emotion RB effect for the gender judgments. Cross task comparisons revealed larger complete RB effects for the expression and gender judgments than the identity decisions. There was a larger emotion RB effect for the expression than gender judgments and the identity RB effect was larger for the gender than for the identity and expression judgments. These results indicate that while faces are subject to RB, this is affected by the type of repeated information and relevance of the facial characteristic to the judgment decision. This study provides further support for the operation of separate processing mechanisms for face gender, emotion, and identity information within models of face recognition. PMID:29038663
Matching voice and face identity from static images.
Mavica, Lauren W; Barenholtz, Elan
2013-04-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two, same-gender models, while simultaneously listening to a voice recording of one of the models pictured in the photographs and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli only contained cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
Individual Differences in Face Identity Processing with Fast Periodic Visual Stimulation.
Xu, Buyun; Liu-Shuang, Joan; Rossion, Bruno; Tanaka, James
2017-08-01
A growing body of literature suggests that human individuals differ in their ability to process face identity. These findings mainly stem from explicit behavioral tasks, such as the Cambridge Face Memory Test (CFMT). However, it remains an open question whether such individual differences can be found in the absence of an explicit face identity task and when faces have to be individualized at a single glance. In the current study, we tested 49 participants with a recently developed fast periodic visual stimulation (FPVS) paradigm [Liu-Shuang, J., Norcia, A. M., & Rossion, B. An objective index of individual face discrimination in the right occipitotemporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 57-72, 2014] in EEG to rapidly, objectively, and implicitly quantify face identity processing. In the FPVS paradigm, one face identity (A) was presented at the frequency of 6 Hz, allowing only one gaze fixation, with different face identities (B, C, D) presented every fifth face (1.2 Hz; i.e., AAAABAAAACAAAAD…). Results showed a face individuation response at 1.2 Hz and its harmonics, peaking over occipitotemporal locations. The magnitude of this response showed high reliability across different recording sequences and was significant in all but two participants, with the magnitude and lateralization differing widely across participants. There was a modest but significant correlation between the individuation response amplitude and the performance of the behavioral CFMT task, despite the fact that CFMT and FPVS measured different aspects of face identity processing. Taken together, the current study highlights the FPVS approach as a promising means for studying individual differences in face identity processing.
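The frequency-tagging logic of the FPVS paradigm can be illustrated with a single-frequency DFT: a response locked to the base presentation rate (6 Hz) and an individuation response at the oddball rate (1.2 Hz) appear as separable spectral peaks. The "EEG" below is a noise-free synthetic toy signal of our own, not real data; actual analyses quantify these peaks against neighboring noise bins.

```python
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of a real signal at one frequency via a single-bin DFT."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Toy signal: 6 Hz "base" response (amplitude 1.0) plus a smaller
# 1.2 Hz "oddball" individuation response (amplitude 0.4).
fs, dur = 250, 10                      # 250 Hz sampling, 10 s epoch
t = [i / fs for i in range(fs * dur)]
eeg = [1.0 * math.sin(2 * math.pi * 6.0 * x)
       + 0.4 * math.sin(2 * math.pi * 1.2 * x) for x in t]

base = amplitude_at(eeg, fs, 6.0)      # recovers the 6 Hz amplitude
oddball = amplitude_at(eeg, fs, 1.2)   # recovers the 1.2 Hz amplitude
```

Because the 10 s epoch contains an integer number of cycles at both tagged rates, each component is recovered exactly; off-target frequencies (e.g., 3 Hz) return essentially zero, which is what makes the oddball response objectively separable from the base response.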
Conflict resolved: On the role of spatial attention in reading and color naming tasks.
Robidoux, Serje; Besner, Derek
2015-12-01
The debate about whether or not visual word recognition requires spatial attention has been marked by a conflict: the results from different tasks yield different conclusions. Experiments in which the primary task is reading-based show no evidence that unattended words are processed, whereas when the primary task is color identification, supposedly unattended words do affect processing. However, the color stimuli used to date do not appear to demand as much spatial attention as explicit word-reading tasks. We first identify a color stimulus that requires as much spatial attention to identify as does a word. We then demonstrate that when spatial attention is appropriately captured, distractor words in unattended locations do not affect color identification. We conclude that there is no word identification without spatial attention.
Positive and negative emotion enhances the processing of famous faces in a semantic judgment task.
Bate, Sarah; Haslam, Catherine; Hodgson, Timothy L; Jansari, Ashok; Gregory, Nicola; Kay, Janice
2010-01-01
Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences latter stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. Copyright 2009 APA, all rights reserved.
Memory for faces and objects by deaf and hearing signers and hearing nonsigners.
Arnold, P; Murray, C
1998-07-01
The memory of 11 deaf and 11 hearing British Sign Language users and 11 hearing nonsigners for pictures of faces and of verbalizable objects was measured using the game Concentration. The three groups performed at the same level for the objects. In contrast, the deaf signers were better for faces than the hearing signers, who in turn were superior to the hearing nonsigners. Three hypotheses were made: that there would be no significant difference in terms of the number of attempts between the three groups on the verbalizable object task, that the hearing and deaf signers would demonstrate superior performance to that of the hearing nonsigners on the matching faces task, and that the hearing and deaf signers would exhibit similar performance levels on the matching faces task. The first two hypotheses were supported, but the third was not. Deaf signers were found to be superior to hearing signers and hearing nonsigners in memory for faces. Possible explanations for the findings are discussed, including the possibility that deafness and the long use of sign language have additive effects.
Making heads turn: the effect of familiarity and stimulus rotation on a gender-classification task.
Stevenage, Sarah V; Osborne, Cara D
2006-01-01
Recent work has demonstrated that facial familiarity can moderate the influence of inversion when completing a configural processing task. Here, we examine whether familiarity interacts with intermediate angles of orientation in the same way that it interacts with inversion. Participants were asked to make a gender classification to familiar and unfamiliar faces shown at seven angles of orientation. Speed and accuracy of performance were assessed for stimuli presented (i) as whole faces and (ii) as internal features. When presented as whole faces, the task was easy, as revealed by ceiling levels of accuracy and no effect of familiarity or angle of rotation on response times. However, when stimuli were presented as internal features, an influence of facial familiarity was evident. Unfamiliar faces showed no increase in difficulty across angle of rotation, whereas familiar faces showed a marked increase in difficulty across angle, which was explained by significant linear and cubic trends in the data. Results were interpreted in terms of the benefit gained from a mental representation when face processing was impaired by stimulus rotation.
Time course of implicit processing and explicit processing of emotional faces and emotional words.
Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred
2011-05-01
Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) and explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity. Copyright © 2011 Elsevier B.V. All rights reserved.
Development of response inhibition in the context of relevant versus irrelevant emotions.
Schel, Margot A; Crone, Eveline A
2013-01-01
The present study examined the influence of relevant and irrelevant emotions on response inhibition from childhood to early adulthood. Ninety-four participants between 6 and 25 years of age performed two go/nogo tasks with emotional faces (neutral, happy, and fearful) as stimuli. In one go/nogo task emotion formed a relevant dimension of the task and in the other go/nogo task emotion was irrelevant and participants had to respond to the color of the faces instead. A special feature of the latter task, in which emotion was irrelevant, was the inclusion of free choice trials, in which participants could freely decide between acting and inhibiting. Results showed a linear increase in response inhibition performance with increasing age both in relevant and irrelevant affective contexts. Relevant emotions had a pronounced influence on performance across age, whereas irrelevant emotions did not. Overall, participants made more false alarms on trials with fearful faces than happy faces, and happy faces were associated with better performance on go trials (higher percentage correct and faster RTs) than fearful faces. The latter effect was stronger for young children in terms of accuracy. Finally, during the free choice trials participants did not base their decisions on affective context, confirming that irrelevant emotions do not have a strong impact on inhibition. Together, these findings suggest that across development relevant affective context has a larger influence on response inhibition than irrelevant affective context. When emotions are relevant, a context of positive emotions is associated with better performance compared to a context with negative emotions, especially in young children.
A novel thermal face recognition approach using face pattern words
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2010-04-01
A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring, and tracking, especially at nighttime. The system can be applied at airports, customs, or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all comparison FPWs (no further transforms are needed). A high identification rate (97.44% with Top-1 match) was achieved on our preliminary face dataset (39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
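The matching step described above can be sketched as a masked fractional Hamming distance over binary face pattern words, with top-1 identification against a gallery. The binary vectors, the `identify` helper, and the toy gallery below are our own illustration; the paper's FPWs are derived from Gabor wavelet responses of the thermal image.

```python
def hamming_distance(fpw_a, fpw_b, mask=None):
    """Fractional Hamming distance between two binary face pattern words.

    `mask` marks which bits to compare (1 = usable); masked-out bits, such
    as an automatically extracted eyeglasses region, are simply ignored.
    """
    assert len(fpw_a) == len(fpw_b)
    if mask is None:
        mask = [1] * len(fpw_a)
    usable = [(a, b) for a, b, m in zip(fpw_a, fpw_b, mask) if m]
    return sum(a != b for a, b in usable) / len(usable)

def identify(query_fpw, gallery, mask=None):
    """Top-1 identification: the gallery subject with the minimum distance."""
    return min(gallery, key=lambda subj: hamming_distance(query_fpw, gallery[subj], mask))
```

For example, a query word differing from subject A's enrolled word in one of four bits still matches A rather than a more distant subject B, and applying a mask removes occluded bits from the comparison entirely.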
Mere social categorization modulates identification of facial expressions of emotion.
Young, Steven G; Hugenberg, Kurt
2010-12-01
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Modulation of the composite face effect by unintended emotion cues.
Gray, Katie L H; Murphy, Jennifer; Marsh, Jade E; Cook, Richard
2017-04-01
When upper and lower regions from different emotionless faces are aligned to form a facial composite, observers 'fuse' the two halves together, perceptually. The illusory distortion induced by task-irrelevant ('distractor') halves hinders participants' judgements about task-relevant ('target') halves. This composite-face effect reveals a tendency to integrate feature information from disparate regions of intact upright faces, consistent with theories of holistic face processing. However, observers frequently perceive emotion in ostensibly neutral faces, contrary to the intentions of experimenters. This study sought to determine whether this 'perceived emotion' influences the composite-face effect. In our first experiment, we confirmed that the composite effect grows stronger as the strength of distractor emotion increased. Critically, effects of distractor emotion were induced by weak emotion intensities, and were incidental insofar as emotion cues hindered image matching, not emotion labelling per se. In Experiment 2, we found a correlation between the presence of perceived emotion in a set of ostensibly neutral distractor regions sourced from commonly used face databases, and the strength of illusory distortion they induced. In Experiment 3, participants completed a sequential matching composite task in which half of the distractor regions had been rated high for perceived emotion and half low. Significantly stronger composite effects were induced by the high-emotion distractor halves. These convergent results suggest that perceived emotion increases the strength of the composite-face effect induced by supposedly emotionless faces. These findings have important implications for the study of holistic face processing in typical and atypical populations.
The relation of saturated fats and dietary cholesterol to childhood cognitive flexibility.
Khan, Naiman A; Raine, Lauren B; Drollette, Eric S; Scudder, Mark R; Hillman, Charles H
2015-10-01
Identification of health behaviors and markers of physiological health associated with childhood cognitive function has important implications for public health policy targeted toward cognitive health throughout the life span. Although previous studies have shown that aerobic fitness and obesity exert contrasting effects on cognitive flexibility among prepubertal children, the extent to which diet plays a role in cognitive flexibility has received little attention. Accordingly, this study examined associations between saturated fats and cholesterol intake and cognitive flexibility, assessed using a task switching paradigm, among prepubertal children between 7 and 10 years (N = 150). Following adjustment of confounding variables (age, sex, socioeconomic status, IQ, VO2max, and BMI), children consuming diets higher in saturated fats exhibited longer reaction time during the task condition requiring greater amounts of cognitive flexibility. Further, increasing saturated fat intake and dietary cholesterol were correlated with greater switch costs, reflecting impaired ability to maintain multiple task sets in working memory and poorer efficiency of cognitive control processes involved in task switching. These data are among the first to indicate that children consuming diets higher in saturated fats and cholesterol exhibit compromised ability to flexibly modulate their cognitive operations, particularly when faced with greater cognitive challenge. Future longitudinal and intervention studies are necessary to comprehensively characterize the interrelationships between diet, aerobic fitness, obesity, and children's cognitive abilities. Copyright © 2015 Elsevier Ltd. All rights reserved.
Medial temporal lobe contributions to short-term memory for faces
Race, Elizabeth; LaRocque, Karen F.; Keane, Margaret M.; Verfaellie, Mieke
2015-01-01
The role of the medial temporal lobes (MTL) in short-term memory (STM) remains a matter of debate. While imaging studies commonly show hippocampal activation during short-delay memory tasks, evidence from amnesic patients with MTL lesions is mixed. It has been argued that apparent STM impairments in amnesia may reflect long-term memory (LTM) contributions to performance. We challenge this conclusion by demonstrating that MTL amnesic patients show impaired delayed matching-to-sample (DMS) for faces in a task that meets both a traditional delay-based and a recently proposed distractor-based criterion for classification as a STM task. In Experiment 1, we demonstrate that our face DMS task meets the proposed distractor-based criterion for STM classification, in that extensive processing of delay-period distractor stimuli disrupts performance of healthy individuals. In Experiment 2, MTL amnesic patients with lesions extending into anterior subhippocampal cortex, but not patients with lesions limited to the hippocampus, show impaired performance on this task without distraction at delays as short as 8s, within temporal range of delay-based STM classification, in the context of intact perceptual matching performance. Experiment 3 provides support for the hypothesis that STM for faces relies on configural processing by showing that the extent to which healthy participants’ performance is disrupted by interference depends on the configural demands of the distractor task. Together, these findings are consistent with the notion that the amnesic impairment in STM for faces reflects a deficit in configural processing associated with subhippocampal cortices and provide novel evidence that the MTL supports cognition beyond the LTM domain. PMID:23937185
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
MacNamara, Annmarie; Vergés, Alvaro; Kujawa, Autumn; Fitzgerald, Kate D.; Monk, Christopher S.; Phan, K. Luan
2016-01-01
Socio-emotional processing is an essential part of development, and age-related changes in its neural correlates can be observed. The late positive potential (LPP) is a measure of motivated attention that can be used to assess emotional processing; however, changes in the LPP elicited by emotional faces have not been assessed across a wide age range in childhood and young adulthood. We used an emotional face matching task to examine behavior and event-related potentials (ERPs) in 33 youth aged 7 to 19 years old. Younger children were slower when performing the matching task. The LPP elicited by emotional faces but not control stimuli (geometric shapes) decreased with age; by contrast, an earlier ERP (the P1) decreased with age for both faces and shapes, suggesting increased efficiency of early visual processing. Results indicate age-related attenuation in emotional processing that may stem from increased efficiency and regulatory control when performing a socio-emotional task. PMID:26220144
Structural face encoding: How task affects the N170's sensitivity to race.
Senholzi, Keith B; Ito, Tiffany A
2013-12-01
The N170 event-related potential (ERP) component differentiates faces from non-faces, but studies aimed at investigating whether the processing indexed by this component is also sensitive to racial differences among faces have garnered conflicting results. Here, we explore how task affects the influence of race on the N170 among White participants. N170s were larger to ingroup White faces than outgroup Black faces, but only for those required to attend to race, suggesting that attention to race can result in deeper levels of processing for ingroup members. Conversely, N170s were larger to Black faces than White faces for participants who attended to the unique identity of the faces, suggesting that attention to identity can result in preferential recruitment of cognitive resources for outgroup members. Taken together, these findings suggest that race can differentially impact face processing at early stages of encoding, but differences in processing are contingent upon one's goal state.
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
Who is who: Italian norms for visual recognition and identification of celebrities.
Bizzozero, I; Ferrari, F; Pozzoli, S; Saetti, M C; Spinnler, H
2005-06-01
Age-, education- and sex-adjusted norms are provided for two new neuropsychological tests, namely (i) Face Recognition (guess of familiarity) and (ii) Person Identification (biographical contextualisation). Sixty-three pictures of celebrities and 63 of unknown people were selected following two interwoven criteria: the realm of their celebrity (i.e., entertainment, culture and politics) and the period of celebrity acquisition (i.e., pre-war, post-war and contemporary). Both media- and education-dependent knowledge of celebrity were considered. Ninety-eight unpaid healthy participants aged between 50 and 93 years and with at least 8 years of formal education took part in this study. Reference is made to serial models of familiar face/person processing. Recognition is held to tap the activity of Personal Identity Nodes (PINs), and identification that of the Exemplar Semantics Archives. Given the seriality of the reference model, the Identification Test is embedded in the Recognition Test. This entailed that only previously recognised faces were employed to achieve norms for identification.
ERIC Educational Resources Information Center
Chung, C-W.; Lee, C-C.; Liu, C-C.
2013-01-01
Mobile computers are now increasingly applied to facilitate face-to-face collaborative learning. However, the factors affecting face-to-face peer interactions are complex as they involve rich communication media. In particular, non-verbal interactions are necessary to convey critical communication messages in face-to-face communication. Through…
ERIC Educational Resources Information Center
Wilson, Rebecca; Pascalis, Olivier; Blades, Mark
2007-01-01
We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…
ERIC Educational Resources Information Center
Lagattuta, Kristin Hansen; Sayfan, Liat; Monsour, Michael
2011-01-01
Two experiments examined 4- to 11-year-olds' and adults' performance (N = 350) on two variants of a Stroop-like card task: the "day-night task" (say "day" when shown a moon and "night" when shown a sun) and a new "happy-sad task" (say "happy" for a sad face and "sad" for a happy face). Experiment 1 featured colored cartoon drawings. In Experiment…
A combined sEMG and accelerometer system for monitoring functional activity in stroke.
Roy, Serge H; Cheng, M Samuel; Chang, Shey-Sheen; Moore, John; De Luca, Gianluca; Nawab, S Hamid; De Luca, Carlo J
2009-12-01
Remote monitoring of physical activity using body-worn sensors provides an alternative to assessment of functional independence by subjective, paper-based questionnaires. This study investigated the classification accuracy of a combined surface electromyographic (sEMG) and accelerometer (ACC) sensor system for monitoring activities of daily living in patients with stroke. sEMG and ACC data (eight channels each) were recorded from 10 hemiparetic patients while they carried out a sequence of 11 activities of daily living (identification tasks), and 10 activities used to evaluate misclassification errors (nonidentification tasks). The sEMG and ACC sensor data were analyzed using a multilayered neural network and an adaptive neuro-fuzzy inference system to identify the minimal sensor configuration needed to accurately classify the identification tasks, with a minimal number of misclassifications from the nonidentification tasks. The results demonstrated that the highest sensitivity and specificity for the identification tasks was achieved using a subset of four ACC sensors and adjacent sEMG sensors located on both upper arms, one forearm, and one thigh, respectively. This configuration resulted in a mean sensitivity of 95.0%, and a mean specificity of 99.7% for the identification tasks, and a mean misclassification error of < 10% for the nonidentification tasks. The findings support the feasibility of a hybrid sEMG and ACC wearable sensor system for automatic recognition of motor tasks used to assess functional independence in patients with stroke.
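The sensitivity and specificity figures reported above reduce to standard confusion-matrix ratios over correctly detected target tasks and correctly rejected non-target activity. A minimal sketch (the counts below are hypothetical, chosen only to reproduce percentages of the same order as those reported; this is not the study's classifier or data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one identification task: of 20 task performances,
# 19 are correctly detected; of 300 non-target segments, 299 are rejected.
sens, spec = sensitivity_specificity(tp=19, fn=1, tn=299, fp=1)
print(round(sens, 3), round(spec, 3))  # 0.95 0.997
```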
Task uncertainty and communication during nursing shift handovers.
Mayor, Eric; Bangerter, Adrian; Aribot, Myriam
2012-09-01
We explore variations in handover duration and communication in nursing units. We hypothesize that duration per patient is higher in units facing high task uncertainty. We expect both topics and functions of communication to vary depending on task uncertainty. Handovers are changing in modern healthcare organizations, where standardized procedures are increasingly advocated for efficiency and reliability reasons. However, redesign of handover should take environmental contingencies of different clinical unit types into account. An important contingency in institutions is task uncertainty, which may affect how communicative routines like handover are accomplished. Nurse unit managers of 80 care units in 18 hospitals were interviewed in 2008 about topics and functions of handover communication and duration in their unit. Interviews were content-analysed. Clinical units were classified into a theory-based typology (unit type) that gradually increases on task uncertainty. Quantitative analyses were performed. Unit type affected resource allocation. Unit types facing higher uncertainty had higher handover duration per patient. As expected, unit type also affected communication content. Clinical units facing higher uncertainty discussed fewer topics, discussing treatment and care and organization of work less frequently. Finally, unit type affected functions of handover: sharing emotions was less often mentioned in unit types facing higher uncertainty. Task uncertainty and its relationship with functions and topics of handover should be taken into account during the design of handover procedures. © 2011 Blackwell Publishing Ltd.
49 CFR 1544.231 - Airport-approved and exclusive area personnel identification systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... carry out a personnel identification system for identification media that are airport-approved, or identification media that are issued for use in an exclusive area. The system must include the following: (1) Personnel identification media that— (i) Convey a full face image, full name, employer, and identification...
Less impairment in face imagery than face perception in early prosopagnosia.
Michelon, Pascale; Biederman, Irving
2003-01-01
There have been a number of reports of preserved face imagery in prosopagnosia. We put this issue to experimental test by comparing the performance of MJH, a 34-year-old prosopagnosic since the age of 5, to controls on tasks where the participants had to judge faces of current celebrities, either in terms of overall similarity (Of Bette Midler, Hillary Clinton, and Diane Sawyer, whose face looks least like the other two?) or on individual features (Is Ronald Reagan's nose pointy?). For each task, a performance measure reflecting the degree of agreement of each participant with the average of the others (not including MJH) was calculated. On the imagery versions of these tasks, MJH was within the lower range of the controls for the agreement measure (though significantly below the mean of the controls). When the same tasks were performed from pictures, agreement among the controls markedly increased whereas MJH's performance was virtually unaffected, placing him well below the range of the controls. This pattern was also apparent with a test of facial features of emotion (Are the eyes wrinkled when someone is surprised?). On three non-face imagery tasks assessing color (What color is a football?), relative lengths of animal's tails (Is a bear's tail long in proportion to its body?), and mental size comparisons (What is bigger, a camel or a zebra?), MJH was within or close to the lower end of the normal range. As most of the celebrities became famous after the onset of MJH's prosopagnosia, our confirmation of the reports of less impaired face imagery in some prosopagnosics cannot be attributed to pre-lesion storage. We speculate that face recognition, in contrast to object recognition, relies more heavily on a representation that describes the initial spatial filter values so the metrics of the facial surface can be specified. 
If prosopagnosia is regarded as a form of simultanagnosia in which some of these filter values cannot be registered on any one encounter with a face, then multiple opportunities for repeated storage may partially compensate for the degraded representation on that single encounter. Imagery may allow access to this more complete representation.
Reduced beta connectivity during emotional face processing in adolescents with autism.
Leung, Rachel C; Ye, Annette X; Wong, Simeon M; Taylor, Margot J; Doesburg, Sam M
2014-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social cognition. The biological basis of deficits in social cognition in ASD, and their difficulty in processing emotional face information in particular, remains unclear. Atypical communication within and between brain regions has been reported in ASD. Interregional phase-locking is a neurophysiological mechanism mediating communication among brain areas and is understood to support cognitive functions. In the present study we investigated interregional magnetoencephalographic phase synchronization during the perception of emotional faces in adolescents with ASD. A total of 22 adolescents with ASD (18 males, mean age = 14.2 ± 1.15 years, 22 right-handed) with mild to no cognitive delay and 17 healthy controls (14 males, mean age = 14.4 ± 0.33 years, 16 right-handed) performed an implicit emotional processing task requiring perception of happy, angry and neutral faces while we recorded neuromagnetic signals. The faces were presented rapidly (80 ms duration) to the left or right of a central fixation cross and participants responded to a scrambled pattern that was presented concurrently on the opposite side of the fixation point. Task-dependent interregional phase-locking was calculated among source-resolved brain regions. Task-dependent increases in interregional beta synchronization were observed. Beta-band interregional phase-locking in adolescents with ASD was reduced, relative to controls, during the perception of angry faces in a distributed network involving the right fusiform gyrus and insula. No significant group differences were found for happy or neutral faces, or other analyzed frequency ranges. Significant reductions in task-dependent beta connectivity strength, clustering and eigenvector centrality (all P < 0.001) in the right insula were found in adolescents with ASD, relative to controls.
Reduced beta synchronization may reflect inadequate recruitment of task-relevant networks during emotional face processing in ASD. The right insula, specifically, was a hub of reduced functional connectivity and may play a prominent role in the inability to effectively extract emotional information from faces. These findings suggest that functional disconnection in brain networks mediating emotional processes may contribute to deficits in social cognition in this population.
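Interregional phase-locking of the kind analyzed above is commonly quantified as a phase-locking value (PLV): the magnitude of the time-averaged phase difference between two signals, ranging from 0 (no locking) to 1 (constant phase lag). A rough illustrative sketch using synthetic beta-band (20 Hz) signals in place of source-resolved MEG data; this is not the authors' pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean over time of exp(i * (phi_x - phi_y))|, in [0, 1]."""
    phi_x = np.angle(hilbert(x))  # instantaneous phase via analytic signal
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Synthetic demo: a 20 Hz oscillation with a constant phase lag is strongly
# locked; one whose phase drifts randomly yields a lower PLV.
fs = 500.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
drift = rng.uniform(-np.pi, np.pi, t.size).cumsum() * 0.05
a = np.sin(2 * np.pi * 20 * t)
b = np.sin(2 * np.pi * 20 * t + 0.5)         # fixed lag: PLV near 1
c = np.sin(2 * np.pi * 20 * t + drift)       # random phase walk: lower PLV

print(phase_locking_value(a, b))  # close to 1
print(phase_locking_value(a, c))  # smaller
```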
Implementation of a high-speed face recognition system that uses an optical parallel correlator.
Watanabe, Eriko; Kodate, Kashiko
2005-02-10
We implement a fully automatic fast face recognition system by using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. The binary real-only matched filter is devised for the sake of face recognition, and the system is optimized by the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected by the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we acquired low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.
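The FRR/FAR trade-off that the system above is optimized on follows from thresholding match scores: genuine comparisons scoring below the threshold are false rejections, impostor comparisons at or above it are false acceptances. A minimal sketch with hypothetical correlation-peak scores (the optical correlator's actual scoring is not reproduced here):

```python
def frr_far(genuine_scores, impostor_scores, threshold):
    """False-rejection and false-acceptance rates at a given threshold,
    assuming higher score = better match. Illustrative only."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Hypothetical scores: genuine (same-person) vs. impostor comparisons.
genuine = [0.91, 0.85, 0.52, 0.66, 0.95]
impostor = [0.30, 0.42, 0.55, 0.71, 0.25]
print(frr_far(genuine, impostor, threshold=0.6))  # (0.2, 0.2)
```

Raising the threshold lowers FAR at the cost of FRR, which is why the paper tunes the operating point against both rates on a fixed sample set.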
An odd manifestation of the Capgras syndrome: loss of familiarity even with the sexual partner.
Thomas Antérion, C; Convers, P; Desmales, S; Borg, C; Laurent, B
2008-06-01
We report the case of a patient who presented visual hallucinations and identification disorders associated with a Capgras syndrome. During the Capgras periods, there was not only a misidentification of his wife's face, but also a more global perceptive and emotional sexual identification disorder. Thus, he had sexual intercourse with his wife's "double" without having the slightest recollection feeling of familiarity towards his "wife" and even changed his sexual habits. To the best of our knowledge, he is the only neurological patient who made his wife a mistress. Starting from this global familiarity loss, we discuss the mechanism of Capgras delusion with reference to the role of the implicit system of face recognition. Such behavior of familiarity loss not only with face but also with all intimacy aspects argues for a specific disconnection between the ventral visual pathway of face identification and the limbic system involved in emotional and episodic memory contents.
Individual differences in perceiving and recognizing faces: One element of social cognition.
Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner
2010-09-01
Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested and the component abilities of face cognition (face perception, face memory, and the speed of face cognition) were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Effect of Modality and Task Type on Interlanguage Variation
ERIC Educational Resources Information Center
Kim, Hye Yeong
2017-01-01
An essential component for assessing the accuracy and fluency of language learners is understanding how mode of communication and task type affect performance in second-language (L2) acquisition. This study investigates how text-based synchronous computer-mediated communication (SCMC) and face-to-face (F2F) oral interaction can influence the…
Conceptual and Socio-Cognitive Support for Collaborative Learning in Videoconferencing Environments
ERIC Educational Resources Information Center
Ertl, Bernhard; Fischer, Frank; Mandl, Heinz
2006-01-01
Studies have shown that videoconferencing is an effective medium for facilitating communication between parties who are separated by distance, particularly when learners are engaged in complex collaborative learning tasks. However, as in face-to-face communication, learners benefit most when they receive additional support for such learning tasks.…
Building Face Composites Can Harm Lineup Identification Performance
ERIC Educational Resources Information Center
Wells, Gary L.; Charman, Steve D.; Olson, Elizabeth A.
2005-01-01
Face composite programs permit eyewitnesses to build likenesses of target faces by selecting facial features and combining them into an intact face. Research has shown that these composites are generally poor likenesses of the target face. Two experiments tested the proposition that this composite-building process could harm the builder's memory…
Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J
2017-09-01
Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Gaze control during face exploration in schizophrenia.
Delerue, Céline; Laprévote, Vincent; Verfaillie, Karl; Boucart, Muriel
2010-10-04
Patients with schizophrenia perform worse than controls on various face perception tasks. Studies monitoring eye movements have shown reduced scan paths and a lower number of fixations to relevant facial features (eyes, nose, mouth) than to other parts. We examine whether attentional control, through instructions, modulates visual scanning in schizophrenia. Visual scan paths were monitored in 20 patients with schizophrenia and 20 controls. Participants started with a "free viewing" task followed by tasks in which they were asked to determine the gender, identify the facial expression, estimate the age, or decide whether the face was known or unknown. Temporal and spatial characteristics of scan paths were compared for each group and task. Consistent with the literature, patients with schizophrenia showed reduced attention to salient facial features during passive viewing. However, their scan paths did not differ from those of controls when asked to determine the facial expression, the gender, the age or the familiarity of the face. The results are interpreted in terms of attentional control and cognitive flexibility. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
Visual attractiveness is leaky: the asymmetrical relationship between face and hair.
Saegusa, Chihiro; Intoy, Janis; Shimojo, Shinsuke
2015-01-01
Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known "what is beautiful is good" stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one's hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom up, task-irrelevant fashion. This raises an intriguing possibility that task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications on the asymmetry between face and hair, and perceptual integration between them in general. Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment.
PMID:25914656
Selecting a Subset of Stimulus-Response Pairs with Maximal Transmitted Information
1992-03-01
System designers are often faced with the task of choosing which of several stimuli should be used to represent…
van Veluw, Susanne J; Chance, Steven A
2014-03-01
The perception of self and others is a key aspect of social cognition. In order to investigate the neurobiological basis of this distinction we reviewed two classes of task that study self-awareness and awareness of others (theory of mind, ToM). A reliable task to measure self-awareness is the recognition of one's own face in contrast to the recognition of others' faces. False-belief tasks are widely used to identify neural correlates of ToM as a measure of awareness of others. We performed an activation likelihood estimation meta-analysis, using the fMRI literature on self-face recognition and false-belief tasks. The brain areas involved in performing false-belief tasks were the medial prefrontal cortex (MPFC), bilateral temporo-parietal junction, precuneus, and the bilateral middle temporal gyrus. Distinct self-face recognition regions were the right superior temporal gyrus, the right parahippocampal gyrus, the right inferior frontal gyrus/anterior cingulate cortex, and the left inferior parietal lobe. Overlapping brain areas were the superior temporal gyrus, and the more ventral parts of the MPFC. We confirmed that self-recognition in contrast to recognition of others' faces, and awareness of others involves a network that consists of separate, distinct neural pathways, but also includes overlapping regions of higher order prefrontal cortex where these processes may be combined. Insights derived from the neurobiology of disorders such as autism and schizophrenia are consistent with this notion.
Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.
Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz
2015-04-01
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension, or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. (c) 2015 APA, all rights reserved.
Bentley, P; Driver, J; Dolan, R J
2009-09-01
Cholinergic influences on memory are likely to be expressed at several processing stages, including via well-recognized effects of acetylcholine on stimulus processing during encoding. Since previous studies have shown that cholinesterase inhibition enhances visual extrastriate cortex activity during stimulus encoding, especially under attention-demanding tasks, we tested whether this effect correlates with improved subsequent memory. In a within-subject physostigmine versus placebo design, we measured brain activity with functional magnetic resonance imaging while healthy and mild Alzheimer's disease subjects performed superficial and deep encoding tasks on face (and building) visual stimuli. We explored regions in which physostigmine modulation of face-selective neural responses correlated with physostigmine effects on subsequent recognition performance. In healthy subjects physostigmine led to enhanced later recognition for deep- versus superficially-encoded faces, which correlated across subjects with a physostigmine-induced enhancement of face-selective responses in right fusiform cortex during deep- versus superficial-encoding tasks. In contrast, the Alzheimer's disease group showed neither a depth of processing effect nor restoration of this with physostigmine. Instead, patients showed a task-independent improvement in confident memory with physostigmine, an effect that correlated with enhancements in face-selective (but task-independent) responses in bilateral fusiform cortices. Our results indicate that one mechanism by which cholinesterase inhibitors can improve memory is by enhancing extrastriate cortex stimulus selectivity at encoding, in a manner that for healthy people but not in Alzheimer's disease is dependent upon depth of processing.
Tavitian, Lucy R; Ladouceur, Cecile D; Nahas, Ziad; Khater, Beatrice; Brent, David A; Maalouf, Fadi T
2014-08-01
The aim of the present study was to examine the effect of neutral and emotional facial expressions on voluntary attentional control, using a working memory (WM) task, in adolescents with major depressive disorder (MDD). We administered the Emotional Face n-back (EFNBACK) task, a visual WM task with neutral, happy and angry faces as distractors, to 22 adolescents with MDD (mean age 15.7 years) and 21 healthy controls (HC) (mean age 14.7 years). There was a significant group by distractor type interaction (p = 0.045) for mean percent accuracy rates. Group comparisons showed that MDD youth were less accurate on neutral trials than HC (p = 0.027). The two groups did not differ on angry, happy and blank trials (p > 0.05). Reaction time did not differ across groups. In addition, when comparing the differences between accuracy on neutral trials and on each of the happy and angry trials, respectively [(HAP-NEUT) and (ANG-NEUT)], there was a group effect on HAP-NEUT, where the difference was larger in MDD than in HC (p = 0.009), but not on ANG-NEUT (p > 0.05). Findings were independent of memory load. These findings indicate that attentional control in the presence of neutral faces is impaired and negatively affects performance on a WM task in adolescents with MDD. Such an impact of neutral faces on attentional control in MDD may be at the core of the social-cognitive impairment observed in this population.
Face identity recognition in autism spectrum disorders: a review of behavioral studies.
Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy
2012-03-01
Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.
Olivares, Ela I; Lage-Castellanos, Agustín; Bobes, María A; Iglesias, Jaime
2018-01-01
We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA" and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.
Neural Conflict–Control Mechanisms Improve Memory for Target Stimuli
Krebs, Ruth M.; Boehler, Carsten N.; De Belder, Maya; Egner, Tobias
2015-01-01
According to conflict-monitoring models, conflict serves as an internal signal for reinforcing top-down attention to task-relevant information. While evidence based on measures of ongoing task performance supports this idea, implications for long-term consequences, that is, memory, have not been tested yet. Here, we evaluated the prediction that conflict-triggered attentional enhancement of target-stimulus processing should be associated with superior subsequent memory for those stimuli. By combining functional magnetic resonance imaging (fMRI) with a novel variant of a face-word Stroop task that employed trial-unique face stimuli as targets, we were able to assess subsequent (incidental) memory for target faces as a function of whether a given face had previously been accompanied by congruent, neutral, or incongruent (conflicting) distracters. In line with our predictions, incongruent distracters not only induced behavioral conflict, but also gave rise to enhanced memory for target faces. Moreover, conflict-triggered neural activity in prefrontal and parietal regions was predictive of subsequent retrieval success, and displayed conflict-enhanced functional coupling with medial-temporal lobe regions. These data provide support for the proposal that conflict evokes enhanced top-down attention to task-relevant stimuli, thereby promoting their encoding into long-term memory. Our findings thus delineate the neural mechanisms of a novel link between cognitive control and memory. PMID:24108799
Validation of the Vanderbilt Holistic Face Processing Test.
Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J
2016-01-01
The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.
Framing faces: Frame alignment impacts holistic face perception.
Curby, Kim M; Entenman, Robert
2016-11-01
Traditional accounts of face perception emphasise the importance of the prototypical configuration of features within faces. However, here we probe influences of more general perceptual grouping mechanisms on holistic face perception. Participants made part-matching judgments about composite faces presented in intact external oval frames or in frames made from misaligned oval parts. This manipulation served to disrupt basic perceptual grouping cues that facilitate the grouping of the two face halves together. It also produced an external face contour like that in the standard misaligned condition used within the classic composite face task. Notably, by introducing a discontinuity in the external contour, grouping of the face halves into a cohesive unit was discouraged, but face configuration was preserved. Conditions where both the face parts and the frames were misaligned together, as in the typical composite task paradigm, or where just the internal face parts were misaligned, were also included. Disrupting only the face frame disrupted holistic face perception as much as disrupting both the frame and the face configuration. However, misaligned face parts presented in aligned frames also incurred a cost to holistic perception. These findings provide support for the contribution of general-purpose perceptual grouping mechanisms to holistic face perception and are presented and discussed in the context of an enhanced object-based selection account of holistic perception.
Associative priming in a masked perceptual identification task: evidence for automatic processes.
Pecher, Diane; Zeelenberg, René; Raaijmakers, Jeroen G W
2002-10-01
Two experiments investigated the influence of automatic and strategic processes on associative priming effects in a perceptual identification task in which prime-target pairs are briefly presented and masked. In this paradigm, priming is defined as a higher percentage of correctly identified targets for related pairs than for unrelated pairs. In Experiment 1, priming was obtained for mediated word pairs. This mediated priming effect was affected neither by the presence of direct associations nor by the presentation time of the primes, indicating that automatic priming effects play a role in perceptual identification. Experiment 2 showed that the priming effect was not affected by the proportion (.90 vs. .10) of related pairs if primes were presented briefly to prevent their identification. However, a large proportion effect was found when primes were presented for 1000 ms so that they were clearly visible. These results indicate that priming in a masked perceptual identification task is the result of automatic processes and is not affected by strategies. The present paradigm provides a valuable alternative to more commonly used tasks such as lexical decision.
Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized
Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.
2012-01-01
Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051
Waters, Allison M; Valvoi, Jaya S
2009-06-01
The present study examined contextual modulation of attentional control processes in paediatric anxiety disorders. Anxious children (N=20) and non-anxious controls (N=20) completed an emotional Go/No Go task in which they either responded to neutral faces (Go trials) presented amongst angry or happy faces, to which they withheld responding (No Go trials), or responded to angry and happy faces as Go trials and withheld responding to neutral faces. Anxious girls were slower to respond to neutral faces embedded amongst angry than amongst happy No Go faces, whereas non-anxious girls were slower to respond to neutral faces embedded amongst happy than amongst angry No Go faces. Anxious and non-anxious boys showed the same basic pattern as non-anxious girls. There were no significant group differences on No Go trials or when the emotional faces were presented as Go trials. Results are discussed in terms of selective interference by angry faces in the control of attention in anxious girls.
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
The cognitive neuroscience of person identification.
Biederman, Irving; Shilowich, Bryan E; Herald, Sarah B; Margalit, Eshed; Maarek, Rafael; Meschke, Emily X; Hacker, Catrina M
2018-02-14
We compare and contrast five differences between person identification by voice and face. 1. There is little or no cost when a familiar face is to be recognized from an unrestricted set of possible faces, even at Rapid Serial Visual Presentation (RSVP) rates, but the accuracy of familiar voice recognition declines precipitously when the set of possible speakers is increased from one to a mere handful. 2. Whereas deficits in face recognition are typically perceptual in origin, those with normal perception of voices can manifest severe deficits in their identification. 3. Congenital prosopagnosics (CPros) and congenital phonagnosics (CPhon) are generally unable to imagine familiar faces and voices, respectively. Only in CPros, however, is this deficit a manifestation of a general inability to form visual images of any kind. CPhons report no deficit in imaging non-voice sounds. 4. The prevalence of CPhons of 3.2% is somewhat higher than the reported prevalence of approximately 2.0% for CPros in the population. There is evidence that CPhon represents a distinct condition statistically and not just normal variation. 5. Face and voice recognition proficiency are uncorrelated rather than reflecting limitations of a general capacity for person individuation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effect of perceptual load on semantic access by speech in children
Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervè
2013-01-01
Purpose: To examine whether semantic access by speech requires attention in children. Method: Children (N=200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multi-modal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load and the multi-modal task a high load [i.e., naming pictures displayed (1) on a blank screen vs. (2) below the talker's face on his T-shirt, respectively]. The semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture dog with distractors bear vs. cheese). Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources if the irrelevant semantic-content manipulation influences naming times on both tasks despite the variation in load, but dependent on attentional resources, exhausted by the higher-load task, if irrelevant content influences naming only on the cross-modal (low-load) task. Results: Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds, but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multi-modal task. Conclusion: Younger and older children differ in their dependence on attentional resources for semantic access by speech. PMID:22896045
McGugin, Rankin Williams; Tanaka, James W.; Lebrecht, Sophie; Tarr, Michael J.; Gauthier, Isabel
2010-01-01
This study explores the effect of individuation training on the acquisition of race-specific expertise. First, we investigated whether practice individuating other-race faces yields improvement in perceptual discrimination for novel faces of that race. Second, we asked whether there was similar improvement for novel faces of a different race for which participants received equal practice, but in an orthogonal task that did not require individuation. Caucasian participants were trained to individuate faces of one race (African American or Hispanic) and to make difficult eye luminance judgments on faces of the other race. By equating these tasks we are able to rule out raw experience, visual attention or performance/success-induced positivity as the critical factors that produce race-specific improvements. These results indicate that individuation practice is one mechanism through which cognitive, perceptual, and/or social processes promote growth of the own-race face recognition advantage. PMID:21429002
Social incentives improve deliberative but not procedural learning in older adults.
Gorlick, Marissa A; Maddox, W Todd
2015-01-01
Age-related deficits are seen across tasks where learning depends on asocial feedback processing; however, plasticity has been observed in some of the same tasks in social contexts, suggesting a novel way to attenuate deficits. Socioemotional selectivity theory suggests this plasticity is due to a deliberative motivational shift toward achieving well-being with age (the positivity effect) that reverses when executive processes are limited (the negativity effect). The present study examined the interaction of feedback valence (positive, negative) and social salience (emotional face feedback: happy, angry; asocial point feedback: gain, loss) on learning in a deliberative task that challenges executive processes and a procedural task that does not. We predicted that angry face feedback would improve learning in a deliberative task when executive function is challenged. We tested two competing hypotheses regarding the interactive effects of deliberative emotional biases on automatic feedback processing: (1) if deliberative emotion regulation and automatic feedback are interactive, we expect happy face feedback to improve learning and angry face feedback to impair learning in older adults because cognitive control is available; (2) if deliberative emotion regulation and automatic feedback are not interactive, we predict that emotional face feedback will not improve procedural learning regardless of valence. Results demonstrate that older adults show persistent deficits relative to younger adults during procedural category learning, suggesting that deliberative emotional biases do not interact with automatic feedback processing. Interestingly, a subgroup of older adults identified as potentially using deliberative strategies tended to learn as well as younger adults with angry relative to happy feedback, matching the pattern observed in the deliberative task. Results suggest that deliberative emotional biases can improve deliberative learning, but have no effect on procedural learning.
Medial temporal lobe contributions to short-term memory for faces.
Race, Elizabeth; LaRocque, Karen F; Keane, Margaret M; Verfaellie, Mieke
2013-11-01
The role of the medial temporal lobes (MTL) in short-term memory (STM) remains a matter of debate. Whereas imaging studies commonly show hippocampal activation during short-delay memory tasks, evidence from amnesic patients with MTL lesions is mixed. It has been argued that apparent STM impairments in amnesia may reflect long-term memory (LTM) contributions to performance. We challenge this conclusion by demonstrating that MTL amnesic patients show impaired delayed matching-to-sample (DMS) for faces in a task that meets both a traditional delay-based and a recently proposed distractor-based criterion for classification as an STM task. In Experiment 1, we demonstrate that our face DMS task meets the proposed distractor-based criterion for STM classification, in that extensive processing of delay-period distractor stimuli disrupts performance of healthy individuals. In Experiment 2, MTL amnesic patients with lesions extending into anterior subhippocampal cortex, but not patients with lesions limited to the hippocampus, show impaired performance on this task without distraction at delays as short as 8 s, within temporal range of delay-based STM classification, in the context of intact perceptual matching performance. Experiment 3 provides support for the hypothesis that STM for faces relies on configural processing by showing that the extent to which healthy participants' performance is disrupted by interference depends on the configural demands of the distractor task. Together, these findings are consistent with the notion that the amnesic impairment in STM for faces reflects a deficit in configural processing associated with subhippocampal cortices and provide novel evidence that the MTL supports cognition beyond the LTM domain. PsycINFO Database Record (c) 2013 APA, all rights reserved.
ERIC Educational Resources Information Center
Lawrence, Jason S.; Charbonneau, Joseph
2009-01-01
Two studies showed that the link between how much students base their self-worth on academics and their math performance depends on whether their identification with math was statistically controlled and whether the task measured ability or not. Study 1 showed that, when math identification was uncontrolled and the task was ability-diagnostic,…
Can we match ultraviolet face images against their visible counterparts?
NASA Astrophysics Data System (ADS)
Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.
2015-05-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions, and with variable expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV; from 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short range, and generated a dual-band (VIS and UV) database composed of multiple, full-frontal face images of 50 subjects, collected in two sessions spanning a period of 2 months; (ii) for each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching; and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching algorithms (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging, and it requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images has been investigated.
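The rank-1 identification rate reported in the abstract above has a standard definition in biometrics: the fraction of probe images whose true gallery mate receives the highest match score. As a minimal illustration of that metric only (the function name and the toy score matrix are ours, not the paper's; the paper's actual matchers and scores are not reproduced here), a NumPy sketch assuming probe i's true mate is gallery entry i:

```python
import numpy as np

def rank1_identification_rate(similarity: np.ndarray) -> float:
    """Fraction of probes whose top-scoring gallery entry is the true mate.

    similarity[i, j] is the match score between probe i (e.g., a UV image)
    and gallery entry j (e.g., a VIS image); probe i's true mate is gallery i.
    """
    best_match = np.argmax(similarity, axis=1)  # index of best gallery entry per probe
    return float(np.mean(best_match == np.arange(similarity.shape[0])))

# Toy 3-probe example: probes 0 and 2 rank their true mates first,
# probe 1 scores a non-mate highest, so the rate is 2/3.
scores = np.array([
    [0.9, 0.2, 0.1],
    [0.4, 0.3, 0.5],
    [0.1, 0.2, 0.8],
])
print(rank1_identification_rate(scores))
```

The same matrix, sorted per row, also yields the full cumulative match characteristic (CMC) curve of which rank-1 is the first point.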
Perception-based road hazard identification with Internet support.
Tarko, Andrew P; DeSalle, Brian R
2003-01-01
One of the most important tasks faced by highway agencies is identifying road hazards. Agencies use crash statistics to detect road intersections and segments where the frequency of crashes is excessive. With the crash-based method, a dangerous intersection or segment can be pointed out only after a sufficient number of crashes occur. A more proactive method is needed, and motorist complaints may be able to assist agencies in detecting road hazards before crashes occur. This paper investigates the quality of safety information reported by motorists and the effectiveness of hazard identification based on motorist reports, which were collected with an experimental Internet website. It demonstrates that the intersections pointed out by motorists tended to have more crashes than other intersections. The safety information collected through the website was comparable to 2-3 months of crash data. It was concluded that although the Internet-based method could not substitute for the traditional crash-based methods, its joint use with crash statistics might be useful in detecting new hazards where crash data had been collected for a short time.
Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel
2013-01-01
Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive interventions. PMID:23805122
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies of the visual processing of emotional stimuli have revealed a preference for a specific type of visual spatial frequency (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used faces and focused on the appraisal of the emotional states of others. The present behavioral study investigates the relative role of spatial frequencies in the processing of emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant for rapidly identifying the self-emotional experience (unpleasant, pleasant, and neutral), while LSF information was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli, whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both the emotional and the motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Dissociating 'what' and 'how' in visual form agnosia: a computational investigation.
Vecera, S P
2002-01-01
Patients with visual form agnosia exhibit a profound impairment in shape perception (what an object is) coupled with intact visuomotor functions (how to act on an object), demonstrating a dissociation between visual perception and action. How can these patients act on objects that they cannot perceive? Although two explanations of this 'what-how' dissociation have been offered, each explanation has shortcomings. A 'pathway information' account of the 'what-how' dissociation is presented in this paper. This account hypothesizes that 'where' and 'how' tasks require less information than 'what' tasks, thereby allowing 'where/how' to remain relatively spared in the face of neurological damage. Simulations with a neural network model test the predictions of the pathway information account. Following damage to an input layer common to the 'what' and 'where/how' pathways, the model performs object identification more poorly than spatial localization. Thus, the model offers a parsimonious explanation of differential 'what-how' performance in visual form agnosia. The simulation results are discussed in terms of their implications for visual form agnosia and other neuropsychological syndromes.
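The pathway-information account lends itself to a small numerical demonstration. The following is a hypothetical sketch under strong simplifying assumptions (random input-layer codes, a nearest-neighbor readout), not Vecera's actual network: both tasks read out from one shared, lesioned input layer, but the 'what' readout must separate many identities while the 'where' readout separates only a few locations, so the finer-grained task degrades more.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy demo of the pathway-information idea: 20 object
# identities appearing at 2 locations, each stimulus coded by a shared
# 40-unit input layer.
n_ident, n_loc, n_units = 20, 2, 40
codes = rng.standard_normal((n_ident * n_loc, n_units))  # input-layer codes
identity = np.repeat(np.arange(n_ident), n_loc)          # 'what' labels
location = np.tile(np.arange(n_loc), n_ident)            # 'where' labels

# Lesion: silence a large fraction of the shared input units.
mask = rng.random(n_units) > 0.8
lesioned = codes * mask

# Nearest-neighbor readout from noisy, lesioned codes.
preds = []
for i in range(len(lesioned)):
    noisy = lesioned[i] + 0.8 * rng.standard_normal(n_units)
    preds.append(int(np.argmin(np.linalg.norm(lesioned - noisy, axis=1))))
preds = np.array(preds)

acc_what = float(np.mean(identity[preds] == identity))
acc_where = float(np.mean(location[preds] == location))
print(acc_what, acc_where)
```

Because the 'where' readout collapses many stimuli onto few labels, most confusions that ruin identification still land on the correct location, mirroring the relative sparing of 'where/how' after shared damage.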
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta
2012-01-01
This study investigates gaze behaviour in child dialogues. In earlier studies the authors investigated the use of requests for clarification and responses in order to study the co-creation of understanding in a referential communication task. By adding eye tracking, this line of research is now expanded to include non-verbal contributions in conversation. The aims were to investigate the timing of gazes in face-to-face interaction and to relate gaze behaviour to the use of requests for clarification. Eight conversational pairs of typically developing 10- to 15-year-olds participated. The pairs (director and executor) performed a referential communication task requiring the description of faces. During the dialogues both participants wore head-mounted eye trackers. All gazes were recorded and categorized according to the area fixated (Task, Face, Off). The verbal context for all instances of gaze at the partner's face was identified and categorized using time-course analysis. The results showed that the executor spends almost 90% of the time fixating on the task, 10% on the director's face, and less than 0.5% elsewhere. Turn shifts, primarily requests for clarification, and back-channelling significantly predicted the executors' gaze at the face of the task director. The distribution of types of requests showed that requests for previously unmentioned information were significantly more likely to be associated with gaze at the director. The study shows that the executors' gaze at the director accompanies important dynamic shifts in the dialogue. The association with requests for clarification indicates that gaze at the director can be used to monitor the response in two modalities. Furthermore, the significantly higher association with requests for previously unmentioned information indicates that gaze may be used to emphasize the verbal content. 
The results will be used as a reference for studies of gaze behaviour in clinical populations with hearing and language impairments. © 2012 Royal College of Speech and Language Therapists.
Human perceptual decision making: disentangling task onset and stimulus onset.
Cardoso-Leite, Pedro; Waszak, Florian; Lepsien, Jöran
2014-07-01
The left dorsolateral prefrontal cortex (ldlPFC) has been highlighted as a key actor in human perceptual decision-making (PDM): it is theorized to support decision formation independently of stimulus type or motor response. PDM studies, however, generally confound stimulus onset and task onset: when the to-be-recognized stimulus is presented, subjects know that a stimulus is shown and can set up processing resources, even when they do not know which stimulus is shown. We hypothesized that the ldlPFC might be involved in task preparation rather than decision formation. To test this, we asked participants to report whether sequences of noisy images contained a face or a house, within an experimental design that decorrelates stimulus and task onset. Decision-related processes should yield a sustained response during the task, whereas preparation-related areas should yield transient responses at its beginning. The results show that the brain activation pattern at task onset is strikingly similar to that observed in previous PDM studies. In particular, they contradict the idea that the ldlPFC forms an abstract decision and suggest instead that its activation reflects preparation for the upcoming task. We further investigated the role of the fusiform face area and parahippocampal place area, which are thought to be face and house detectors, respectively, feeding their signals to higher-level decision areas. The response patterns within these areas suggest that this interpretation is unlikely and that decisions about the presence of a face or a house in a noisy image might instead already be computed within these areas, without requiring higher-order areas. Copyright © 2013 Wiley Periodicals, Inc.
Effects of spatial frequency content on classification of face gender and expression.
Aguado, Luis; Serrano-Pedraza, Ignacio; Rodríguez, Sonia; Román, Francisco J
2010-11-01
The role of different spatial frequency bands in face gender and expression categorization was studied in three experiments. Accuracy and reaction time were measured for unfiltered, low-pass (cutoff frequency of 1 cycle/deg), and high-pass (cutoff frequency of 3 cycles/deg) filtered faces. Filtered and unfiltered faces were equated in root-mean-square contrast. For low-pass filtered faces, reaction times were longer than for unfiltered and high-pass filtered faces in both categorization tasks. In the expression task, these results were obtained with expressive faces presented in isolation (Experiment 1) and also with neutral-expressive dynamic sequences, in which each expressive face was preceded by a briefly presented neutral version of the same face (Experiment 2). For high-pass filtered faces, different effects were observed on gender and expression categorization. While both the speed and accuracy of gender categorization were reduced compared to unfiltered faces, the efficiency of expression classification remained similar. Finally, we found no differences between expressive and nonexpressive faces in the effects of spatial frequency filtering on gender categorization (Experiment 3). These results show a common role of information from the high spatial frequency band in the categorization of face gender and expression.
ERIC Educational Resources Information Center
Vanmarcke, Steven; Wagemans, Johan
2017-01-01
Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (valence task). Adolescents with ASD…
Dong, Debo; Wang, Yulin; Jia, Xiaoyan; Li, Yingjia; Chang, Xuebin; Vandekerckhove, Marie; Luo, Cheng; Yao, Dezhong
2017-11-15
Impairment of face perception in schizophrenia is a core aspect of social cognitive dysfunction. This impairment is particularly marked in threatening face processing. Identifying reliable neural correlates of impaired threatening face processing is crucial for targeting more effective treatments. However, neuroimaging studies have not yet reached robust conclusions. Through a comprehensive literature search, twenty-one whole-brain datasets were included in this meta-analysis. Using Seed-based d Mapping, in this voxel-based meta-analysis we aimed to: 1) establish the most consistent brain dysfunctions related to threatening face processing in schizophrenia; 2) address task-type heterogeneity in this impairment; and 3) explore the effect of potential demographic or clinical moderator variables on this impairment. The main meta-analysis indicated that patients with chronic schizophrenia showed attenuated activations in the limbic emotional system along with compensatory over-activation in the medial prefrontal cortex (MPFC) during threatening face processing. Sub-task analyses revealed under-activations in the right amygdala and left fusiform gyrus in both implicit and explicit tasks. The remaining clusters were found to be differently involved in different task types. Moreover, meta-regression analyses showed that brain abnormalities in schizophrenia were partly modulated by age, gender, medication, and severity of symptoms. Our results highlight breakdowns in the limbic-MPFC circuit in schizophrenia, suggesting a general inability to coordinate and contextualize salient threat stimuli. These findings provide potential targets for neurotherapeutic and pharmacological interventions for schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.
Standoff imaging of a masked human face using a 670 GHz high resolution radar
NASA Astrophysics Data System (ADS)
Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.
2011-11-01
This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected using the two dominant reflections in the amplitude data. Various methods for visualizing these surfaces are presented. The possibility of using radar data to determine face distance measures between well-defined facial landmarks, typically used for anthropometric statistics, was explored. The measures used here were face length, frontal breadth, and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For accurate identification, it is believed that a system must provide significantly more information.
Radke, Sina; Kalt, Theresa; Wagels, Lisa; Derntl, Birgit
2018-01-01
Motivational tendencies toward happy and angry faces are well established, for example in the context of aggression. Approach-avoidance reactions are not only elicited by emotional expressions but are also linked to the evaluation of stable social characteristics of faces. Grounded in the two fundamental dimensions of face-based evaluations proposed by Oosterhof and Todorov (2008), the current study tested whether emotionally neutral faces varying in trustworthiness and dominance potentiate approach-avoidance in 50 healthy male participants. Given that evaluations of social traits are influenced by testosterone, we further tested for associations of approach-avoidance tendencies with endogenous and prenatal indicators of testosterone. Computer-generated faces signaling high and low trustworthiness and dominance were used to elicit motivational reactions in three approach-avoidance tasks: one implicit and one explicit joystick-based paradigm, and an additional rating task. When participants rated their behavioral tendencies, highly trustworthy faces evoked approach and highly dominant faces evoked avoidance. This pattern, however, did not translate to faster initiation times of corresponding approach-avoidance movements. Instead, the joystick tasks revealed general effects, such as faster reactions to faces signaling high trustworthiness or high dominance. These findings partially support the framework of Oosterhof and Todorov (2008) in guiding approach-avoidance decisions, but not behavioral tendencies. Contrary to our expectations, neither endogenous nor prenatal indicators of testosterone were associated with motivational tendencies. Future studies should investigate the contexts in which testosterone influences social motivation. PMID:29410619
Chen, Jiu; Ma, Wentao; Zhang, Yan; Wu, Xingqu; Wei, Dunhong; Liu, Guangxiong; Deng, Zihe; Yang, Laiqi; Zhang, Zhijun
2014-01-01
Background: States of depression are associated with increased sensitivity to negative events. For this novel study, we assessed the relationship between the number of depressive episodes and the dysfunctional processing of emotional facial expressions. Methodology/Principal Findings: We used a visual emotional oddball paradigm to manipulate the processing of emotional information while event-related brain potentials were recorded in 45 patients with first-episode major depression (F-MD), 40 patients with recurrent major depression (R-MD), and 46 healthy controls (HC). Compared with the HC group, F-MD patients had lower N170 amplitudes when identifying happy, neutral, and sad faces; R-MD patients had lower N170 amplitudes when identifying happy and neutral faces, but higher N170 amplitudes when identifying sad faces. F-MD patients had longer N170 latencies when identifying happy, neutral, and sad faces relative to the HC group, and R-MD patients had longer N170 latencies when identifying happy and neutral faces, but shorter N170 latencies when identifying sad faces compared with F-MD patients. Interestingly, a negative relationship was observed between N170 amplitude and the depressive severity score for identification of happy faces in R-MD patients, while N170 amplitude was positively correlated with the depressive severity score for identification of sad faces in F-MD and R-MD patients. Additionally, the deficits in N170 amplitude for sad faces correlated positively with the number of depressive episodes in R-MD patients. Conclusion/Significance: These results provide new evidence that more recurrent depressive episodes and more serious depressive states are likely to aggravate the already abnormal processing of emotional facial expressions in patients with depression. 
Moreover, it further suggests that impaired processing, as indexed by the N170 amplitude for positive face identification, may be a useful biomarker for predicting the propagation of depression, while the N170 amplitude for negative face identification could be a potential biomarker for depression recurrence. PMID:25314024
King, Andy J; Gehl, Robert W; Grossman, Douglas; Jensen, Jakob D
2013-12-01
Skin self-examination (SSE) is one method for identifying atypical nevi among members of the general public. Unfortunately, past research has shown that SSE has low sensitivity in detecting atypical nevi. The current study investigates whether crowdsourcing (collective effort) can improve SSE identification accuracy. Collective effort is potentially useful for improving people's visual identification of atypical nevi during SSE because, even when a single person has low reliability at a task, the pattern of the group can overcome the limitations of each individual. Adults (N=500) were recruited from a shopping mall in the Midwest. Participants viewed educational pamphlets about SSE and then completed a mole identification task. For the task, participants were asked to circle mole images that appeared atypical. Forty nevi images were provided; nine of the images were of nevi that were later diagnosed as melanoma. Consistent with past research, individual effort exhibited modest sensitivity (.58) for identifying atypical nevi in the mole identification task. As predicted, collective effort overcame the limitations of individual effort. Specifically, a 19% collective effort identification threshold exhibited superior sensitivity (.90). The results of the current study suggest that limitations of SSE can be countered by collective effort, a finding that supports the pursuit of interventions promoting early melanoma detection that contain crowdsourced visual identification components. Copyright © 2013 Elsevier Ltd. All rights reserved.
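The gain from pooling unreliable raters can be checked with a back-of-the-envelope binomial model. This is a simplified sketch assuming independent raters with the study's reported individual sensitivity of .58; the panel size of 50 is a hypothetical illustration, and real raters' judgments are unlikely to be fully independent.

```python
import math

def group_sensitivity(n_raters, p_individual, threshold_frac):
    """Probability that at least threshold_frac of n independent raters,
    each with sensitivity p_individual, flag a truly atypical nevus."""
    k_min = math.ceil(threshold_frac * n_raters)
    return sum(
        math.comb(n_raters, k) * p_individual**k * (1 - p_individual)**(n_raters - k)
        for k in range(k_min, n_raters + 1)
    )

# Individual sensitivity from the study is .58; the collective threshold is 19%.
print(round(group_sensitivity(50, 0.58, 0.19), 3))
```

With an individual hit rate well above the 19% flagging threshold, the group's sensitivity approaches 1 even for a modest panel. Note the same pooling also amplifies false positives unless the threshold is chosen, as in the study, to balance sensitivity against specificity.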
DriveID: safety innovation through individuation.
Sawyer, Ben; Teo, Grace; Mouloua, Mustapha
2012-01-01
The driving task is highly complex and places considerable perceptual, physical, and cognitive demands on the driver. As driving is fundamentally an information processing activity, distracted or impaired drivers have diminished safety margins compared with non-distracted drivers (Hancock and Parasuraman, 1992; TRB 1998 a & b). This competition for sensory and decision-making capacities can lead to failures that cost lives. Some groups, teen and elderly drivers for example, show patterns of systematically poor perceptual, physical, and cognitive performance while driving. Although technologies have been developed to aid these different drivers, these systems are often misused and underutilized. The DriveID project aims to design and develop a passive, automated face identification system capable of robustly identifying the driver of the vehicle, retrieving a stored profile, and intelligently prescribing specific accident prevention systems and driving environment customizations.
Passing the Baton: An Experimental Study of Shift Handover
NASA Technical Reports Server (NTRS)
Parke, Bonny; Hobbs, Alan; Kanki, Barbara
2010-01-01
Shift handovers occur in many safety-critical environments, including aviation maintenance, medicine, air traffic control, and mission control for space shuttle and space station operations. Shift handovers are associated with increased risk of communication failures and human error. In dynamic industries, errors and accidents occur disproportionately after shift handover. Typical shift handovers involve transferring information from an outgoing shift to an incoming shift via written logs or, in some cases, face-to-face briefings. The current study explores the possibility of improving written communication with the support modalities of audio and video recordings, as well as face-to-face briefings. Fifty participants completed an experimental task that mimicked some of the critical challenges involved in transferring information between shifts in industrial settings. All three support modalities (face-to-face briefings, video recordings, and audio recordings) significantly reduced task errors relative to written communication alone. The support modality most preferred by participants was face-to-face communication; the least preferred was written communication alone.
Pehlivanoglu, Didem; Jain, Shivangi; Ariel, Robert; Verhaeghen, Paul
2014-01-01
In the present study, we investigated age-related differences in the processing of emotional stimuli. Specifically, we were interested in whether older adults would show deficits in unbinding emotional expression (i.e., either no emotion, happiness, anger, or disgust) from bound stimuli (i.e., photographs of faces expressing these emotions), as a hyper-binding account of age-related differences in working memory would predict. Younger and older adults completed different N-Back tasks (side-by-side 0-Back, 1-Back, 2-Back) under three conditions: match/mismatch judgments based on either the identity of the face (identity condition), the face’s emotional expression (expression condition), or both identity and expression of the face (both condition). The two age groups performed more slowly and with lower accuracy in the expression condition than in the both condition, indicating the presence of an unbinding process. This unbinding effect was more pronounced in older adults than in younger adults, but only in the 2-Back task. Thus, older adults seemed to have a specific deficit in unbinding in working memory. Additionally, no age-related differences were found in accuracy in the 0-Back task, but such differences emerged in the 1-Back task, and were further magnified in the 2-Back task, indicating independent age-related differences in attention/STM and working memory. Pupil dilation data confirmed that the attention/STM version of the task (1-Back) is more effortful for older adults than younger adults. PMID:24795660
Resting-state functional connectivity indexes reading competence in children and adults.
Koyama, Maki S; Di Martino, Adriana; Zuo, Xi-Nian; Kelly, Clare; Mennes, Maarten; Jutagir, Devika R; Castellanos, F Xavier; Milham, Michael P
2011-06-08
Task-based neuroimaging studies face the challenge of developing tasks capable of equivalently probing reading networks across different age groups. Resting-state fMRI, which requires no specific task, circumvents these difficulties. Here, in 25 children (8-14 years) and 25 adults (21-46 years), we examined the extent to which individual differences in reading competence can be related to resting-state functional connectivity (RSFC) of regions implicated in reading. In both age groups, reading standard scores correlated positively with RSFC between the left precentral gyrus and other motor regions, and between Broca's and Wernicke's areas. This suggests that, regardless of age group, stronger coupling among motor regions, as well as between language/speech regions, subserves better reading, presumably reflecting automatized articulation. We also observed divergent RSFC-behavior relationships in children and adults, particularly those anchored in the left fusiform gyrus (FFG) (the visual word form area). In adults, but not children, better reading performance was associated with stronger positive correlations between FFG and phonology-related regions (Broca's area and the left inferior parietal lobule), and with stronger negative relationships between FFG and regions of the "task-negative" default network. These results suggest that both positive RSFC (functional coupling) between reading regions and negative RSFC (functional segregation) between a reading region and default network regions are important for automatized reading, characteristic of adult readers. Together, our task-independent RSFC findings highlight the importance of appreciating developmental changes in the neural correlates of reading competence, and suggest that RSFC may serve to facilitate the identification of reading disorders in different age groups.
Guan, Lili; Qi, Mingming; Zhang, Qinglin; Yang, Juan
2014-01-01
The implicit positive association (IPA) theory attributes the self-face advantage to the IPA with the self-concept. A previous behavioral study found that self-concept threat (SCT) could eliminate the self-face recognition advantage over familiar faces, without taking levels of facial familiarity into account. The current event-related potential study aimed to investigate whether SCT could eliminate the self-face advantage over stranger faces. Fifteen participants completed a "self-friend" comparison task, in which they identified the face orientation of self-face and friend-face after SCT and non-self-concept threat (NSCT) priming, and a "self-stranger" comparison task, in which they identified the face orientation of self-face and stranger-face after SCT and NSCT priming. The results showed that N2 amplitudes were more negative for processing friend-face than self-face after NSCT priming, but there was no significant difference between them after SCT priming. Moreover, N2 amplitudes were more negative for processing stranger-face than self-face both after SCT priming and after NSCT priming. Furthermore, SCT modulated the N2 amplitudes for friend-face rather than self-face. Overall, the present study supplements the current IPA theory and further indicates that SCT eliminates the self-face recognition advantage only when the comparison is with important others.
Li, Tianbi; Wang, Xueqin; Pan, Junhao; Feng, Shuyuan; Gong, Mengyuan; Wu, Yaxue; Li, Guoxiang; Li, Sheng; Yi, Li
2017-11-01
The processing of social stimuli, such as human faces, is impaired in individuals with autism spectrum disorder (ASD), which could be accounted for by their lack of social motivation. The current study examined how the attentional processing of faces in children with ASD could be modulated by the learning of face-reward associations. Sixteen high-functioning children with ASD and 20 age- and ability-matched typically developing peers participated in the experiments. All children started with a reward learning task, in which they were presented with three female faces attributed with positive, negative, and neutral values and were required to remember the faces and their associated values. After this, they were tested on recognition of the learned faces and on a visual search task in which the learned faces served as distractors. We found a modulatory effect of the face-reward associations on the visual search but not the recognition performance in both groups, despite the lower efficacy among children with ASD in learning the face-reward associations. Specifically, both groups responded faster when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral, suggesting efficient attentional processing of these reward-associated faces. Our findings provide direct evidence for the perceptual-level modulatory effect of reward learning on the attentional processing of faces in individuals with ASD. Autism Res 2017, 10: 1797-1807. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. In our study, we tested whether the face processing of individuals with ASD could be changed when the faces were associated with different social meanings. We found no effect of social meanings on face recognition, but both groups responded faster in the visual search task when one of the distractor faces was associated with positive or negative values than when it was neutral. 
The findings suggest that children with ASD could efficiently process faces associated with different values like typical children. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Lagattuta, Kristin Hansen; Sayfan, Liat; Monsour, Michael
2011-05-01
Two experiments examined 4- to 11-year-olds' and adults' performance (N = 350) on two variants of a Stroop-like card task: the day-night task (say 'day' when shown a moon and 'night' when shown a sun) and a new happy-sad task (say 'happy' for a sad face and 'sad' for a happy face). Experiment 1 featured colored cartoon drawings. In Experiment 2, the happy-sad task featured photographs, and pictures for both measures were gray scale. All age groups made more errors and took longer to respond to the happy-sad versus the day-night versions. Unlike the day-night task, the happy-sad task did not suffer from ceiling effects, even in adults. The happy-sad task provides a methodological advance for measuring executive function across a wide age range.
Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.
Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen
2018-05-01
We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identifications on different cameras are considered related tasks, which allows the shared information among the tasks to be exploited to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as descriptions of persons. To improve the accuracy of such descriptions, we introduce a low-rank attribute embedding, which maps the original binary attributes into a continuous space utilizing the correlative relationships between pairs of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed from an attribute embedding error and a quadratic loss on class labels, and is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is validated to outperform existing methods by significant margins.
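The rectification idea behind a low-rank attribute embedding can be sketched in a few lines. This is a simplified, assumed form of the embedding term only (a ridge-regularized linear self-reconstruction, truncated to low rank), not the paper's full multi-task objective with per-camera classifiers; the toy data and variable names are illustrative.

```python
import numpy as np

def rectify_attributes(A, r, lam=0.1):
    """Learn a linear embedding Z minimizing ||A - A @ Z||^2 + lam*||Z||^2,
    truncate Z to rank r via SVD, and return the continuous, "rectified"
    attribute descriptions A @ Z_r, in which correlated attributes repair
    one another."""
    m = A.shape[1]
    Z = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ A)  # ridge solution
    U, s, Vt = np.linalg.svd(Z)
    Z_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]              # low-rank truncation
    return A @ Z_r

# Toy data: 6 binary attributes where attribute 1 duplicates attribute 0
# (a stand-in for strongly correlated attributes). One annotation is then
# corrupted, mimicking an inaccurate binary attribute.
rng = np.random.default_rng(1)
A = (rng.random((30, 6)) < 0.5).astype(float)
A[:, 1] = A[:, 0]
A_noisy = A.copy()
A_noisy[0, 1] = 1.0 - A_noisy[0, 1]          # flip one binary attribute
A_rect = rectify_attributes(A_noisy, r=5)
# The corrupted entry is pulled back toward its correlated twin's value:
print(A_noisy[0, 1], round(A_rect[0, 1], 2))
```

Dropping the smallest-variance direction of the embedding removes exactly the component in which the two duplicated attributes disagree, so the corrupted continuous value lands between the two binary readings rather than at the wrong extreme.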
To hear or not to hear: Voice processing under visual load.
Zäske, Romi; Perlich, Marie-Christin; Schweinberger, Stefan R
2016-07-01
Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: that is, task-irrelevant faces can escape perceptual-load effects. Here we tested voice processing to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.
Who is who: areas of the brain associated with recognizing and naming famous faces.
Giussani, Carlo; Roux, Franck-Emmanuel; Bello, Lorenzo; Lauwers-Cances, Valérie; Papagno, Costanza; Gaini, Sergio M; Puel, Michelle; Démonet, Jean-François
2009-02-01
It has been hypothesized that the brain may contain regions specifically involved in face naming. To spare these areas and to gain a better understanding of their organization, the authors studied patients who underwent surgery for brain tumors using direct electrical stimulation mapping, comparing an object-naming task to a famous-face-naming task. Fifty-six patients with brain tumors (39 and 17 in the left and right hemispheres, respectively) and with no significant preoperative overall language deficit were prospectively studied over a 2-year period. Four patients who had a partially selective famous-face anomia and 2 with prosopagnosia were not included in the final analysis. Face-naming interferences were exclusively localized in small cortical areas (<1 cm²). Among 35 patients whose dominant left hemisphere was studied, 26 face-naming-specific areas (that is, sites of interference in face naming only and not in object naming) were found. These face-naming-specific sites were significantly detected in 2 regions: the left frontal areas of the superior, middle, and inferior frontal gyri (p < 0.001) and the anterior part of the superior and middle temporal gyri (p < 0.01). Variable patterns of interference were observed (speech arrest, anomia, phonemic or semantic paraphasia), probably related to the different stages in famous-face processing. Only 4 famous-face-naming interferences were found in the right hemisphere. Relative anatomical segregation of naming categories within language areas was detected. This study showed that famous-face naming is preferentially processed in the left frontal and anterior temporal gyri. The authors think it is necessary to adapt naming tasks in neurosurgical patients to the brain region studied.
ERIC Educational Resources Information Center
Greenberg, Seth N.; Goshen-Gottstein, Yonatan
2009-01-01
The present work considers the mental imaging of faces, with a focus on own-face imaging. Experiments 1 and 3 demonstrated an own-face disadvantage, with slower generation of mental images of one's own face than of other familiar faces. In contrast, Experiment 2 demonstrated that mental images of facial parts are generated more quickly for one's…
Linking information needs with evaluation: the role of task identification.
Weir, C. R.
1998-01-01
Action Identification Theory was used to explore users' subjective constructions of information tasks in a primary care setting. The first part of the study involved collecting clinicians' descriptions of their information tasks. These items were collated and then rated by another, larger group of clinicians. Results clearly identified 6 major information tasks, including communication, patient assessment, work monitoring, seeking science information, compliance with policies and procedures, and data integration. Results are discussed in terms of implications for evaluation and assessing information needs in a clinical setting. PMID:9929232
Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline
2006-02-01
Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks, testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females with AS and matched controls. Results showed the AS group performed worse than controls overall, on emotion recognition from faces and voices and on 12/20 specific emotions. Females recognised faces better than males regardless of diagnosis, and males with AS had more difficulties recognising emotions from faces than from voices. The implications of these results are discussed in relation to social functioning in AS.
An Eye Tracking Investigation of Attentional Biases towards Affect in Young Children
ERIC Educational Resources Information Center
Burris, Jessica L.; Barry-Anwar, Ryan A.; Rivera, Susan M.
2017-01-01
This study examines attentional biases in the presence of angry, happy and neutral faces using a modified eye tracking version of the dot probe task (DPT). Participants were 111 young children between 9 and 48 months. Children passively viewed an affective attention bias task that consisted of a face pairing (neutral paired with either neutral,…
Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Thomson, Kendra
2008-01-01
Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…
ERIC Educational Resources Information Center
Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin
2011-01-01
The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…
Graphite Girls in a Gigabyte World: Managing the World Wide Web in 700 Square Feet
ERIC Educational Resources Information Center
Ogletree, Tamra; Saurino, Penelope; Johnson, Christie
2009-01-01
Our action research project examined university-level students' on-task and off-task behaviors when using wireless laptops in face-to-face classes, in order to establish rules of wireless laptop etiquette in classroom settings. Participants in the case study of three university classrooms included undergraduate, graduate, and doctoral students.…
Masking release for words in amplitude-modulated noise as a function of modulation rate and task
Buss, Emily; Whittle, Lisa N.; Grose, John H.; Hall, Joseph W.
2009-01-01
For normal-hearing listeners, masked speech recognition can improve with the introduction of masker amplitude modulation. The present experiments tested the hypothesis that this masking release is due in part to an interaction between the temporal distribution of cues necessary to perform the task and the probability of those cues temporally coinciding with masker modulation minima. Stimuli were monosyllabic words masked by speech-shaped noise, and masker modulation was introduced via multiplication with a raised sinusoid of 2.5–40 Hz. Tasks included detection, three-alternative forced-choice identification, and open-set identification. Overall, there was more masking release associated with the closed-set than the open-set tasks. The best rate of modulation also differed as a function of task; whereas low modulation rates were associated with best performance for the detection and three-alternative identification tasks, performance improved with modulation rate in the open-set task. This task-by-rate interaction was also observed when amplitude-modulated speech was presented in a steady masker, and for low- and high-pass filtered speech presented in modulated noise. These results were interpreted as showing that the optimal rate of amplitude modulation depends on the temporal distribution of speech cues and the information required to perform a particular task. PMID:19603883
Neural conflict-control mechanisms improve memory for target stimuli.
Krebs, Ruth M; Boehler, Carsten N; De Belder, Maya; Egner, Tobias
2015-03-01
According to conflict-monitoring models, conflict serves as an internal signal for reinforcing top-down attention to task-relevant information. While evidence based on measures of ongoing task performance supports this idea, implications for long-term consequences, that is, memory, have not been tested yet. Here, we evaluated the prediction that conflict-triggered attentional enhancement of target-stimulus processing should be associated with superior subsequent memory for those stimuli. By combining functional magnetic resonance imaging (fMRI) with a novel variant of a face-word Stroop task that employed trial-unique face stimuli as targets, we were able to assess subsequent (incidental) memory for target faces as a function of whether a given face had previously been accompanied by congruent, neutral, or incongruent (conflicting) distracters. In line with our predictions, incongruent distracters not only induced behavioral conflict, but also gave rise to enhanced memory for target faces. Moreover, conflict-triggered neural activity in prefrontal and parietal regions was predictive of subsequent retrieval success, and displayed conflict-enhanced functional coupling with medial-temporal lobe regions. These data provide support for the proposal that conflict evokes enhanced top-down attention to task-relevant stimuli, thereby promoting their encoding into long-term memory. Our findings thus delineate the neural mechanisms of a novel link between cognitive control and memory. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Crop Identification Technology Assessment for Remote Sensing (CITARS). Volume 1: Task design plan
NASA Technical Reports Server (NTRS)
Hall, F. G.; Bizzell, R. M.
1975-01-01
A plan for quantifying the crop identification performances resulting from the remote identification of corn, soybeans, and wheat is described. Steps for the conversion of multispectral data tapes to classification results are specified. The crop identification performances resulting from the use of several basic types of automatic data processing techniques are compared and examined for significant differences. The techniques are evaluated also for changes in geographic location, time of the year, management practices, and other physical factors. The results of the Crop Identification Technology Assessment for Remote Sensing task will be applied extensively in the Large Area Crop Inventory Experiment.
Power effects on implicit prejudice and stereotyping: The role of intergroup face processing.
Schmid, Petra C; Amodio, David M
2017-04-01
Power is thought to increase discrimination toward subordinate groups, yet its effect on different forms of implicit bias remains unclear. We tested whether power enhances implicit racial stereotyping, in addition to implicit prejudice (i.e., evaluative associations), and examined the effect of power on the automatic processing of faces during implicit tasks. Study 1 showed that manipulated high power increased both forms of implicit bias, relative to low power. Using a neural index of visual face processing (the N170 component of the ERP), Study 2 revealed that power affected the encoding of White ingroup vs. Black outgroup faces. Whereas high power increased the relative processing of outgroup faces during evaluative judgments in the prejudice task, it decreased the relative processing of outgroup faces during stereotype trait judgments. An indirect effect of power on implicit prejudice through enhanced processing of outgroup versus ingroup faces suggested a potential link between face processing and implicit bias. Together, these findings demonstrate that power can affect implicit prejudice and stereotyping as well as early processing of racial ingroup and outgroup faces.
Verbal overshadowing of face memory does occur in children too!
Dehon, Hedwige; Vanootighem, Valentine; Brédart, Serge
2013-01-01
Verbal descriptions of unfamiliar faces have been found to impair later identification of these faces in adults, a phenomenon known as the “verbal overshadowing effect” (VOE). Although determining whether children are good at describing unfamiliar individuals and whether these descriptions impair their recognition performance is critical to gaining a better understanding of children's eyewitness ability, only a couple of studies have examined this dual issue in children, and these found no evidence of VOE. However, as there are some methodological criticisms of these studies, we decided to conduct two further experiments in 7–8, 10–11, and 13–14-year-old children and in adults using a more optimal method for the VOE to be observed. Evidence of the VOE on face identification was found in both children and adults. Moreover, neither the accuracy of descriptions, nor delay, nor target presence in the lineup was found to be associated with identification accuracy. The theoretical and developmental implications of these findings are discussed. PMID:24399985
Social presence and the composite face effect.
Garcia-Marques, Teresa; Fernandes, Alexandre; Fonseca, Ricardo; Prada, Marilia
2015-06-01
A robust finding in social psychology research is that performance is modulated by the social nature of a given context, promoting social inhibition or facilitation effects. In the present experiment, we examined if and how social presence impacts holistic face perception processes by asking participants, in the presence of others and alone, to perform the composite face task. Results suggest that completing the task in the presence of others (i.e., mere co-action) is associated with better performance in face recognition (less bias and higher discrimination between presented and non-presented targets) and with a reduction in the composite face effect. These results make clear that the impact of social presence on the composite face effect does not occur because presence increases reliance on holistic processing as a "dominant," well-learned response, but rather because it increases monitoring of the interference produced by the automatic response. Copyright © 2015. Published by Elsevier B.V.
Fitousi, Daniel
2016-03-01
Composite faces combine the top half of one face with the bottom half of another to create a compelling illusion of a new face. Evidence for holistic processing with composite faces comes primarily from a matching procedure in a selective attention task. In the present study, a dual-task approach has been employed to study whether composite faces reflect genuine holistic (i.e., fusion of parts) or non-holistic processing strategies (i.e., switching, resource sharing). This has been accomplished by applying the Attention Operation Characteristic methodology (AOC, Sperling & Melchner, 1978a, 1978b) and cross-contingency correlations (Bonnel & Prinzmetal, 1998) to composite faces. Overall, the results converged on the following conclusions: (a) observers can voluntarily allocate differential amounts of attention to the top and bottom parts in both spatially aligned and misaligned composite faces, (b) the interaction between composite face halves is due to attentional limitations, not due to switching or fusion strategies, and (c) the processing of aligned and misaligned composite faces is quantitatively and qualitatively similar. Taken together, these results challenge the holistic interpretation of the composite face illusion. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhang, Yan; Kong, Fanchang; Chen, Hong; Jackson, Todd; Han, Li; Meng, Jing; Yang, Zhou; Gao, Jianguo; Najam ul Hasan, Abbasi
2011-11-01
In this experiment, sensitivity to female facial attractiveness was examined by comparing event-related potentials (ERPs) in response to attractive and unattractive female faces within a study-test paradigm. Fourteen heterosexual participants (age range 18-24 years, mean age 21.67 years) were required to judge 84 attractive and 84 unattractive face images as either "attractive" or "unattractive." They were then asked whether they had previously viewed each face in a recognition task in which 50% of the images were novel. Analyses indicated that attractive faces elicited larger ERP amplitudes than did unattractive faces in judgment (N300 and P350-550 msec) and recognition (P160, N250-400 msec, and P400-700 msec) tasks at anterior locations. Moreover, longer reaction times and higher accuracy rates were observed in identifying attractive faces than unattractive faces. In sum, this research identified neural and behavioral bases related to cognitive preferences for judging and recognizing attractive female faces. Explanations for the results are that attractive female faces arouse more intense positive emotions in participants than do unattractive faces, and they also represent reproductive fitness and mating value from the evolutionary perspective. Copyright © 2011 Wiley-Liss, Inc.
Enhanced Visual Short-Term Memory for Angry Faces
ERIC Educational Resources Information Center
Jackson, Margaret C.; Wu, Chia-Yun; Linden, David E. J.; Raymond, Jane E.
2009-01-01
Although some views of face perception posit independent processing of face identity and expression, recent studies suggest interactive processing of these 2 domains. The authors examined expression-identity interactions in visual short-term memory (VSTM) by assessing recognition performance in a VSTM task in which face identity was relevant and…
ERIC Educational Resources Information Center
Cabaroglu, Nese; Basaran, Suleyman; Roberts, Jon
2010-01-01
This study compares pauses, repetitions and recasts in matched task interactions under face-to-face and computer-mediated conditions. Six first-year English undergraduates at a Turkish University took part in Skype-based voice chat with a native speaker and face-to-face with their instructor. Preliminary quantitative analysis of transcripts showed…
Guillaume, Fabrice; Etienne, Yann
2015-03-01
Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Yang, Ping; Wang, Min; Jin, Zhenlan; Li, Ling
2015-01-01
The ability to focus on task-relevant information, while suppressing distraction, is critical for human cognition and behavior. Using a delayed-match-to-sample (DMS) task, we investigated the effects of emotional face distractors (positive, negative, and neutral faces) on early and late phases of visual short-term memory (VSTM) maintenance intervals, using low and high VSTM loads. Behavioral results showed decreased accuracy and delayed reaction times (RTs) for high vs. low VSTM load. Event-related potentials (ERPs) showed enhanced frontal N1 and occipital P1 amplitudes for negative faces vs. neutral or positive faces, implying rapid attentional alerting effects and early perceptual processing of negative distractors. However, high VSTM load appeared to inhibit face processing in general, showing decreased N1 amplitudes and delayed P1 latencies. An inverse correlation between the N1 activation difference (high-load minus low-load) and RT costs (high-load minus low-load) was found at left frontal areas when viewing negative distractors, suggesting that the greater the inhibition the lower the RT cost for negative faces. Emotional interference effect was not found in the late VSTM-related parietal P300, frontal positive slow wave (PSW) and occipital negative slow wave (NSW) components. In general, our findings suggest that the VSTM load modulates the early attention and perception of emotional distractors. PMID:26388763
Prasad, Raghu; Muniyandi, Manivannan; Manoharan, Govindan; Chandramohan, Servarayan M
2018-05-01
The purpose of this study was to examine the face and construct validity of a custom-developed bimanual laparoscopic force-skills trainer with haptics feedback. The study also examined the effect of handedness on fundamental and complex tasks. Residents (n = 25) and surgeons (n = 25) performed virtual reality-based bimanual fundamental and complex tasks. Tool-tissue reaction forces were summed, recorded, and analysed. Seven different force-based measures and a 1-time measure were used as metrics. Subsequently, participants filled out face validity and demographic questionnaires. Residents and surgeons were positive on the design, workspace, and usefulness of the simulator. Construct validity results showed significant differences between residents and experts during the execution of fundamental and complex tasks. In both tasks, residents applied large forces with higher coefficient of variation and force jerks (P < .001). Experts, with their dominant hand, applied lower forces in complex tasks and higher forces in fundamental tasks (P < .001). The coefficients of force variation (CoV) of residents and experts were higher in complex tasks (P < .001). Strong correlations were observed between CoV and task time for fundamental (r = 0.70) and complex tasks (r = 0.85). Range of smoothness of force was higher for the non-dominant hand in both fundamental and complex tasks. The simulator was able to differentiate the force-skills of residents and surgeons, and objectively evaluate the effects of handedness on laparoscopic force-skills. Competency-based laparoscopic skills assessment curriculum should be updated to meet the requirements of bimanual force-based training.
Multiple independent identification decisions: a method of calibrating eyewitness identifications.
Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul
2004-02-01
Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)
Cultural differences in on-line sensitivity to emotional voices: comparing East and West
Liu, Pan; Rigoulot, Simon; Pell, Marc D.
2015-01-01
Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an Oddball task. Participants viewed in-group emotional faces, with or without simultaneous vocal expressions, while performing a face-irrelevant visual task as the EEG was recorded. A significantly larger visual Mismatch Negativity (vMMN) was observed for Chinese vs. English participants when faces were accompanied by voices, suggesting that Chinese were influenced to a larger extent by task-irrelevant vocal cues. These data highlight further differences in how adults from East Asian vs. Western cultures process socio-emotional cues, arguing that distinct cultural practices in communication (e.g., display rules) shape neurocognitive activity associated with the early perception and integration of multi-sensory emotional cues. PMID:26074808
García-Blanco, Ana C; Perea, Manuel; Salmerón, Ladislao
2013-12-01
An antisaccade experiment, using happy, sad, and neutral faces, was conducted to examine the effect of mood-congruent information on inhibitory control (antisaccade task) and attentional orienting (prosaccade task) during the different episodes of bipolar disorder (BD) - manic (n=22), depressive (n=25), and euthymic (n=24). A group of 28 healthy controls was also included. Results revealed that symptomatic patients committed more antisaccade errors than healthy individuals, especially with mood-congruent faces. The manic group committed more antisaccade errors in response to happy faces, while the depressed group tended to commit more antisaccade errors in response to sad faces. Additionally, antisaccade latencies were slower in BD patients than in healthy individuals, whereas prosaccade latencies were slower in symptomatic patients. Taken together, these findings revealed the following: (a) slow inhibitory control in BD patients, regardless of their episode (i.e., a trait), and (b) impaired inhibitory control restricted to symptomatic patients (i.e., a state). Copyright © 2013 Elsevier B.V. All rights reserved.
Family environment influences emotion recognition following paediatric traumatic brain injury.
Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S
2010-01-01
This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest family functioning variables--especially financial resources--can influence performance on an emotional processing task following TBI in children.
The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception.
Kietzmann, Tim C; Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam
2015-12-16
Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. 
We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. Copyright © 2015 the authors 0270-6474/15/3516398-06$15.00/0.
Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J
2015-08-01
Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. ((c) 2015 APA, all rights reserved).
Prakash, Akanksha; Rogers, Wendy A
2015-04-01
Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task, although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.
Acoustic cue integration in speech intonation recognition with cochlear implants.
Peng, Shu-Chen; Chatterjee, Monita; Lu, Nelson
2012-06-01
The present article reports on the perceptual weighting of prosodic cues in question-statement identification by adult cochlear implant (CI) listeners. Acoustic analyses of normal-hearing (NH) listeners' production of sentences spoken as questions or statements confirmed that in English the last bisyllabic word in a sentence carries the dominant cues (F0, duration, and intensity patterns) for the contrast. Furthermore, these analyses showed that the F0 contour is the primary cue for the question-statement contrast, with intensity and duration changes conveying important but less reliable information. On the basis of these acoustic findings, the authors examined adult CI listeners' performance in two question-statement identification tasks. In Task 1, 13 CI listeners' question-statement identification accuracy was measured using naturally uttered sentences matched for their syntactic structures. In Task 2, the same listeners' perceptual cue weighting in question-statement identification was assessed using resynthesized single-word stimuli, within which fundamental frequency (F0), intensity, and duration properties were systematically manipulated. Both tasks were also conducted with four NH listeners with full-spectrum and noise-band-vocoded stimuli. Perceptual cue weighting was assessed by comparing the estimated coefficients in logistic models fitted to the data. Of the 13 CI listeners, 7 achieved high performance levels in Task 1. The results of Task 2 indicated that multiple sources of acoustic cues for question-statement identification were utilized to different extents depending on the listening conditions (e.g., full spectrum vs. spectrally degraded) or the listeners' hearing and amplification status (e.g., CI vs. NH).
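The cue-weighting analysis described above can be sketched as a logistic regression in which a listener's question/statement responses are modeled from the manipulated cue values, with the fitted coefficients serving as perceptual weights. The sketch below uses synthetic data from a hypothetical listener who relies mostly on F0; the cue names, effect sizes, and fitting details are illustrative assumptions, not the authors' actual model.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic model by full-batch gradient descent on cross-entropy.
    Returns d cue weights followed by an intercept term."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)  # last entry is the intercept
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # derivative of cross-entropy w.r.t. z
            for j in range(d):
                grad[j] += err * xi[j]
            grad[-1] += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Hypothetical trials: each row holds z-scored F0, intensity, and duration shifts
# of a resynthesized stimulus; y = 1 means the listener judged it a question.
random.seed(0)
X, y = [], []
for _ in range(200):
    f0, inten, dur = (random.gauss(0, 1) for _ in range(3))
    z = 3.0 * f0 + 0.5 * inten + 0.3 * dur  # simulated listener: F0-dominant
    p = 1.0 / (1.0 + math.exp(-z))
    X.append([f0, inten, dur])
    y.append(1 if random.random() < p else 0)

w = fit_logistic(X, y)
weights = dict(zip(["F0", "intensity", "duration"], w[:3]))
print(weights)  # the F0 coefficient should dominate for this simulated listener
```

Comparing such coefficients across listening conditions (full-spectrum vs. spectrally degraded, CI vs. NH) is one way the relative reliance on each cue can be quantified.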
Lazar, Steven M; Evans, David W; Myers, Scott M; Moreno-De Luca, Andres; Moore, Gregory J
2014-04-15
Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study we examined the neural substrates of social cognition and face processing in a group of healthy young adults. Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) was administered a face-perception task during functional magnetic resonance imaging. Variance in the N170 EEG was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population. Copyright © 2014 Elsevier B.V. All rights reserved.
The impact of self-avatars on trust and collaboration in shared virtual environments.
Pan, Ye; Steed, Anthony
2017-01-01
A self-avatar is known to have a potentially significant impact on the user's experience of the immersive content, but it can also affect how users interact with each other in a shared virtual environment (SVE). We implemented an SVE for a consumer virtual reality system where each user's body could be represented by a jointed self-avatar that was dynamically controlled by head and hand controllers. We investigated the impact of a self-avatar on collaborative outcomes such as completion time and trust formation during competitive and cooperative tasks. We used two different embodiment levels: no self-avatar and self-avatar, and compared these to an in-person face-to-face version of the tasks. We found that participants could finish the task more quickly when they cooperated than when they competed, for both the self-avatar condition and the face-to-face condition, but not for the no self-avatar condition. In terms of trust formation, both the self-avatar condition and the face-to-face condition led to higher scores than the no self-avatar condition; however, collaboration style had no significant effect on trust built between partners. The results are further evidence of the importance of a self-avatar representation in immersive virtual reality.
Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker
2016-06-01
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.
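Presenting stimuli "at a signal-to-noise ratio giving approximately 90% intelligibility," as above, requires mixing speech and noise at a behaviorally calibrated target SNR. The intelligibility calibration itself is behavioral, but the mixing step can be sketched generically: scale the noise so the speech-to-noise power ratio hits the target. This is an illustrative sketch with made-up signals (a pure tone standing in for speech, Gaussian noise for the masker), not the authors' actual procedure.

```python
import math
import random

def rms(samples):
    """Root-mean-square amplitude of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Scale the noise so that 20*log10(rms(speech)/rms(noise)) equals the
    target SNR in dB; the mixture is then a simple elementwise sum."""
    current_snr_db = 20 * math.log10(rms(speech) / rms(noise))
    gain = 10 ** ((current_snr_db - target_snr_db) / 20)  # gain applied to noise
    return [n * gain for n in noise]

random.seed(1)
speech = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]  # 1 s tone at 8 kHz
noise = [random.gauss(0, 0.05) for _ in range(8000)]                    # stationary noise
noise_scaled = scale_noise_to_snr(speech, noise, 4.0)
mixture = [s + n for s, n in zip(speech, noise_scaled)]
print(round(20 * math.log10(rms(speech) / rms(noise_scaled)), 1))  # → 4.0
```

In practice the target (here an arbitrary 4 dB) would be chosen per listener or group from a separate intelligibility function, and amplification applied on top to compensate for audibility loss.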