Super-recognizers: People with extraordinary face recognition ability
Russell, Richard; Duchaine, Brad; Nakayama, Ken
2009-04-01
We tested four people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all four experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than with inverted faces, and the four subjects showed a larger "inversion effect" than control subjects, who in turn showed a larger inversion effect than developmental prosopagnosics. This indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these "super-recognizers" are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability, and show that the range of face recognition and face perception ability is wider than previously acknowledged. PMID:19293090
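The single-case logic described above (one subject compared against a small control sample on an upright-minus-inverted "inversion effect") can be sketched in a few lines. This is an illustrative sketch with invented accuracy scores, not the paper's data; the Crawford-Howell modified t-test used here is a standard statistic for comparing a single case to a small control group, though the abstract does not specify which statistic the authors used.

```python
import math

def inversion_effect(upright_acc, inverted_acc):
    """Inversion effect: the drop in accuracy from upright to inverted faces."""
    return upright_acc - inverted_acc

def crawford_howell_t(case_score, control_scores):
    """Crawford-Howell modified t: compare one case to a small control sample.

    The result is referred to a t distribution with n - 1 degrees of freedom.
    """
    n = len(control_scores)
    mean = sum(control_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in control_scores) / (n - 1))
    return (case_score - mean) / (sd * math.sqrt(1 + 1 / n))

# Hypothetical accuracies for one putative super-recognizer and five controls
case = inversion_effect(0.98, 0.70)
controls = [inversion_effect(u, i) for u, i in
            [(0.85, 0.70), (0.80, 0.66), (0.88, 0.75), (0.82, 0.71), (0.86, 0.73)]]
t = crawford_howell_t(case, controls)
print(t)  # large positive t: the case's inversion effect exceeds the control range
```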
Russell, Richard; Chatterjee, Garga; Nakayama, Ken
2011-01-01
Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636
Humphries, Joyce E; Flowe, Heather D; Hall, Louise C; Williams, Louise C; Ryder, Hannah L
2016-01-01
This study examined whether beliefs about face recognition ability differentially influence memory retrieval in older compared to young adults. Participants evaluated their ability to recognise faces and were also given information about their ability to perceive and recognise faces. The information was ostensibly based on an objective measure of their ability, but in actuality, participants had been randomly assigned the information they received (high ability, low ability, or no-information control). Following this information, face recognition accuracy for a set of previously studied faces was measured using a remember-know memory paradigm. Older adults rated their ability to recognise faces as poorer compared to young adults. Additionally, negative information about face recognition ability improved only older adults' ability to recognise a previously seen face. Older adults were also found to rely more on familiarity-based than on item-specific processing compared to young adults, but information about their face recognition ability did not affect face processing style. The role that older adults' memory beliefs have in the meta-cognitive strategies they employ is discussed.
Turano, Maria Teresa; Viggiano, Maria Pia
2017-11-01
The relationship between face recognition ability and socioemotional functioning has been widely explored. However, how aging modulates this association, in terms of both objective performance and subjective self-perception, remains largely unexamined. Participants, aged between 18 and 81 years, performed a face memory test and completed subjective face recognition and socioemotional questionnaires. General anxiety, social anxiety, and neuroticism traits account for individual variation in face recognition abilities during adulthood. Aging modulates these relationships: with increasing age, individuals presenting higher levels of these traits also show poorer face recognition ability. Intriguingly, the association between depression and face recognition abilities becomes evident with increasing age. Overall, the present results emphasize the importance of embedding face metacognition measurement into the context of these studies and suggest that aging is an important factor to consider, as it appears to contribute to the relationship between socioemotional and face-cognitive functioning.
Genetic specificity of face recognition.
Shakeshaft, Nicholas G; Plomin, Robert
2015-10-13
Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities. PMID:26417086
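As a rough illustration of how twin designs yield heritability estimates like the 61% reported above: Falconer's formula contrasts monozygotic and dizygotic twin correlations. The correlations below are invented so that the estimate lands near the reported figure; the study itself used multivariate model-fitting, not this simple univariate approximation.

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's approximation: h2 = 2 * (rMZ - rDZ)."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Shared-environment component under the same model: c2 = 2 * rDZ - rMZ."""
    return 2 * r_dz - r_mz

# Invented twin correlations chosen so the estimate lands near the reported 61%
r_mz, r_dz = 0.70, 0.395
h2 = falconer_heritability(r_mz, r_dz)  # additive genetic component
c2 = shared_environment(r_mz, r_dz)     # shared environment
e2 = 1 - r_mz                           # nonshared environment plus measurement error
print(h2, c2, e2)
```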
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: face recognition performance is increasingly similar to object recognition performance with increasing object experience. Among subjects with extensive object experience, poor object recognition predicts similarly poor face recognition. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
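The moderation claim above (the object-face overlap grows with experience) corresponds statistically to an interaction term in a regression. Below is a sketch on simulated data; the variable names and effect sizes are invented, and only the analysis pattern, testing whether the object-to-face slope depends on experience, mirrors the study's logic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
object_rec = rng.normal(size=n)   # object recognition score (z-scored)
experience = rng.normal(size=n)   # self-reported category experience (z-scored)
# Simulated moderation: the object->face slope grows with experience
face_rec = (0.2 * experience
            + (0.3 + 0.4 * experience) * object_rec
            + rng.normal(scale=0.5, size=n))

# Regress face recognition on object recognition, experience, and their product;
# a reliable product (interaction) coefficient is the signature of moderation.
X = np.column_stack([np.ones(n), object_rec, experience, object_rec * experience])
beta, *_ = np.linalg.lstsq(X, face_rec, rcond=None)
print(beta)  # beta[3] estimates the interaction; it should recover roughly 0.4
```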
Do people have insight into their face recognition abilities?
Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor
2017-02-01
Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, corroborated by poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces - the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests.
In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to perform poorly on behavioural tests of face recognition do indeed perform poorly. However, it is not yet clear whether individuals within this group of poor performers have greater levels of insight (i.e., into their degree of impairment) than those with more typical levels of performance.
Face recognition ability matures late: evidence from individual differences in young adults.
Susilo, Tirta; Germine, Laura; Duchaine, Bradley
2013-10-01
Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities.
Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco
2018-06-01
This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition, and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular focus of the research on facial affect recognition, without similar attention to the structural encoding involved in facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, cataloguing the face recognition tasks used to assess face processing abilities in ADHD, and identifying aspects not yet explored.
Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W
2014-01-01
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.
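Hierarchical multiple regression of the kind described above enters predictor blocks in steps and tracks the change in explained variance (R²) at each step. The following is a minimal sketch on simulated data; the predictors match those named in the abstract, but the effect sizes are invented and only the stepwise structure reflects the study.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least-squares fit; returns R-squared (with intercept in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 80
gender = rng.integers(0, 2, size=n).astype(float)
object_rec = rng.normal(size=n)
aq = rng.normal(size=n)  # Autism Quotient (simulated, z-scored)
face_rec = -0.4 * gender + 0.5 * object_rec - 0.3 * aq + rng.normal(scale=0.8, size=n)

ones = np.ones(n)
step1 = np.column_stack([ones, gender])                   # demographics only
step2 = np.column_stack([ones, gender, object_rec])       # + object recognition
step3 = np.column_stack([ones, gender, object_rec, aq])   # + autistic traits
for step in (step1, step2, step3):
    print(round(r_squared(step, face_rec), 3))  # R-squared rises at each step
```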
[Comparative studies of face recognition].
Kawai, Nobuyuki
2012-07-01
Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies show that not only primates but also non-primates possess the ability to extract information from the faces of their conspecifics and of human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
2017-05-01
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1.
Capturing specific abilities as a window into human individuality: the example of face recognition.
Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken
2012-01-01
Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise
2014-06-01
Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.
Bayesian Face Recognition and Perceptual Narrowing in Face-Space
Balas, Benjamin
2012-01-01
During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…
Covert face recognition in congenital prosopagnosia: a group study.
Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max
2012-03-01
Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP.
The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition
Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian
2009-01-01
Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…
Face recognition: a model specific ability.
Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken
2014-01-01
In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities, an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.
Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth
2014-09-01
Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. Copyright © 2014 Elsevier Ltd. All rights reserved.
Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces
ERIC Educational Resources Information Center
de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne
2012-01-01
Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…
Davis, Joshua M; McKone, Elinor; Dennett, Hugh; O'Connor, Kirsty B; O'Kearney, Richard; Palermo, Romina
2011-01-01
Previous research has been concerned with the relationship between social anxiety and the recognition of face expression but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus to poor development of face recognition mechanisms.
Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P
2016-03-01
Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Assessment of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found on emotional face recognition between NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.
Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia
2014-01-01
In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
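The individual-differences analysis described in this abstract (a face-specific ability index formed by regressing out object performance, then correlated with a neural selectivity measure) can be sketched in a few lines. This is a minimal illustration under assumed data, not the study's actual pipeline; all values and variable names are hypothetical.

```python
# Sketch of the analysis described above: regress face-recognition accuracy
# on object-recognition accuracy, take the residuals as a face-specific
# ability index, then correlate that index with each participant's face
# selectivity (e.g., an FFA z score). All data are made-up illustrations.

def residualize(y, x):
    """Residuals of y after simple least-squares regression on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Hypothetical per-participant data.
face_acc   = [0.85, 0.70, 0.92, 0.60, 0.78, 0.88]   # face memory accuracy
object_acc = [0.80, 0.75, 0.85, 0.70, 0.74, 0.82]   # object memory accuracy
ffa_z      = [2.9, 1.8, 3.4, 1.2, 2.1, 3.0]         # FFA face selectivity (z)

face_index = residualize(face_acc, object_acc)  # object-general variance removed
r = pearson_r(face_index, ffa_z)                # brain-behavior association
print(round(r, 2))
```

Residualizing before correlating is what licenses the abstract's "domain-specific" claim: any shared variance with general object recognition is removed from the behavioral index before it is related to neural selectivity.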
Functional architecture of visual emotion recognition ability: A latent variable approach.
Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W
2016-05-01
Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes. (c) 2016 APA, all rights reserved.
de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H
2014-07-01
Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life. © 2013 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
Face recognition and description abilities in people with mild intellectual disabilities.
Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R; Hancock, Peter J B
2013-09-01
People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Two experiments examined face recognition and description abilities in people with mild intellectual disabilities (mID) and compared their performance with that of people without ID. Experiment 1 used three old/new face recognition tasks. Experiment 2 consisted of two face description tasks, during which participants had to verbally describe faces from memory and with the target in view. Participants with mID performed significantly worse on both recognition and recall tasks than control participants. However, their group performance was better than chance and they showed variability in performance depending on the measures introduced. The practical implications of these findings in forensic settings are discussed. © 2013 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah
2013-01-01
Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…
Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D; Schultz, Robert T
2010-08-01
An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. In a randomized clinical trial, children diagnosed with autism spectrum disorder were pre-screened with a battery of subtests (the Let's Face It! Skills battery) examining face and object processing abilities. Participants who were significantly impaired in their face processing abilities were assigned to either a treatment or a waitlist group. Children in the treatment group (N = 42) received 20 hours of face training with the Let's Face It! (LFI!) computer-based intervention. The LFI! program is comprised of seven interactive computer games that target the specific face impairments associated with autism, including the recognition of identity across image changes in expression, viewpoint and features, analytic and holistic face processing strategies and attention to information in the eye region. Time 1 and Time 2 performance for the treatment and waitlist groups was assessed with the Let's Face It! Skills battery. The main finding was that relative to the control group (N = 37), children in the face training group demonstrated reliable improvements in their analytic recognition of mouth features and holistic recognition of a face based on its eye features. These results indicate that a relatively short-term intervention program can produce measurable improvements in the face recognition skills of children with autism. As a treatment for face processing deficits, the Let's Face It! program has advantages of being cost-free, adaptable to the specific learning needs of the individual child and suitable for home and school applications.
ERIC Educational Resources Information Center
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2013-01-01
Research Findings: The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children ("N" = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills…
Laurence, Sarah; Mondloch, Catherine J
2016-03-01
Most previous research on the development of face recognition has focused on recognition of highly controlled images. One of the biggest challenges of face recognition is to identify an individual across images that capture natural variability in appearance. We created a child-friendly version of Jenkins, White, Van Montfort, and Burton's sorting task (Cognition, 2011, Vol. 121, pp. 313-323) to investigate children's recognition of personally familiar and unfamiliar faces. Children between 4 and 12 years of age were presented with a familiar/unfamiliar teacher's house and a pile of face photographs (nine pictures each of the teacher and another identity). Each child was asked to put all the pictures of the teacher inside the house while keeping the other identity out. Children over 6 years of age showed adult-like familiar face recognition. Unfamiliar face recognition improved across the entire age range, with considerable variability in children's performance. These findings suggest that children's ability to tolerate within-person variability improves with age and support a face-space framework in which faces are represented as regions, the size of which increases with age. Copyright © 2015 Elsevier Inc. All rights reserved.
Meinhardt-Injac, Bozana; Daum, Moritz M.; Meinhardt, Günter; Persike, Malte
2018-01-01
According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes, as well as slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes, we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, albeit reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute a closer empirical specification of the general two-systems account. PMID:29445336
Dissociable roles of internal feelings and face recognition ability in facial expression decoding.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
2016-05-15
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
Bayesian Face Recognition and Perceptual Narrowing in Face-Space
Balas, Benjamin
2012-01-01
During the first year of life, infants’ face recognition abilities are subject to “perceptual narrowing,” the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in “face space,” a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise. PMID:22709406
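The core computation in Bayesian face matching of this kind can be illustrated with a deliberately simplified sketch: two faces are judged to be the same identity when their difference in "face space" is more probable under a narrow within-identity distribution than under a broad between-identity one. The 1-D space, Gaussian form, and parameter values below are assumptions for illustration, not the paper's actual model.

```python
import math

# Hedged sketch of Bayesian same/different face matching in a 1-D face
# space. Experience with a face category is modeled as shrinking the
# within-identity spread for that category, so small differences become
# discriminable; for an unfamiliar (e.g., other-race) category the spread
# stays large and distinct faces are collapsed onto one identity, which
# is the "perceptual narrowing" pattern described above.

def gauss_pdf(x, sigma):
    """Zero-mean Gaussian density with standard deviation sigma."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def same_identity(face_a, face_b, sigma_within=0.5, sigma_between=2.0):
    """Likelihood-ratio decision: True if 'same person' is more probable."""
    diff = face_a - face_b
    return gauss_pdf(diff, sigma_within) > gauss_pdf(diff, sigma_between)

print(same_identity(0.0, 0.3))   # small face-space difference
print(same_identity(0.0, 3.0))   # large face-space difference
```

Raising `sigma_within` for an unfamiliar category makes `same_identity` return True for larger and larger differences, i.e., discriminable faces are no longer told apart.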
Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?
ERIC Educational Resources Information Center
Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan
2012-01-01
Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…
Contextual modulation of biases in face recognition.
Felisberti, Fatima Maria; Pavey, Louisa
2010-09-23
The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.
Recognition of own-race and other-race faces by three-month-old infants.
Sangrigoli, Sandy; De Schonen, Scania
2004-10-01
People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested face recognition in 3-month-old Caucasian infants by conducting two experiments using Caucasian and Asiatic faces and a visual pair-comparison task. We hypothesized that if the other race effect develops together with face processing skills during the first months of life, the ability to recognize own-race faces will be greater than the ability to recognize other-race faces: 3-month-old Caucasian infants should be better at recognizing Caucasian faces than Asiatic faces. If, on the contrary, the other-race effect is the long-term result of acquired expertise, no difference between recognizing own- and other-race faces will be observed at that age. In Experiment 1, Caucasian infants were habituated to a single face. Recognition was assessed by a novelty preference paradigm. The infants' recognition performance was better for Caucasian than for Asiatic faces. In Experiment 2, Caucasian infants were familiarized with three individual faces. Recognition was demonstrated with both Caucasian and Asiatic faces. These results suggest that (i) the representation of face information by 3-month-olds may be race-experience-dependent (Experiment 1), and (ii) short-term familiarization with exemplars of another race group is sufficient to reduce the other-race effect and to extend the power of face processing (Experiment 2).
Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism
ERIC Educational Resources Information Center
Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula
2007-01-01
We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…
ERIC Educational Resources Information Center
de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.
2014-01-01
Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…
Newborns' Face Recognition: Role of Inner and Outer Facial Features
ERIC Educational Resources Information Center
Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene
2006-01-01
Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…
ERIC Educational Resources Information Center
Spangler, Sibylle M.; Freitag, Claudia; Schwarzer, Gudrun; Vierhaus, Marc; Teubert, Manuel; Lamm, Bettina; Kolling, Thorsten; Graf, Frauke; Goertz, Claudia; Fassbender, Ina; Lohaus, Arnold; Knopf, Monika; Keller, Heidi
2011-01-01
The aim of the present study was to investigate whether temperament and cognitive abilities are related to recognition performance of Caucasian and African faces and of a nonfacial stimulus class, Greebles. Seventy Caucasian infants were tested at 3 months with a habituation/dishabituation paradigm and their temperament and cognitive abilities…
Albonico, Andrea; Malaspina, Manuela; Daini, Roberta
2017-09-01
The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies highlighted that participant-stimulus ethnicity match, as much as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of inversion effect (the difference between the total score of the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.
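The inversion-effect index this abstract proposes is simple arithmetic: a participant's upright CFMT total minus their inverted CFMT total. A small or absent difference is the suggested additional marker. The sketch below illustrates only that computation; the cutoff value is an arbitrary placeholder, not the paper's norm.

```python
# Sketch of the inversion-effect index described above: upright minus
# inverted CFMT total score. Typical observers show a large positive
# effect; a near-zero difference (lack of inversion effect) is the
# proposed additional marker for congenital prosopagnosia. The cutoff
# here is a hypothetical illustration, not a published norm.

def inversion_effect(upright_score, inverted_score):
    """Upright minus inverted total score on the CFMT."""
    return upright_score - inverted_score

def lacks_inversion_effect(upright_score, inverted_score, cutoff=3):
    """Flag a lack of inversion effect (hypothetical cutoff)."""
    return inversion_effect(upright_score, inverted_score) < cutoff

print(inversion_effect(58, 42))        # typical observer: large positive effect
print(lacks_inversion_effect(40, 39))  # near-zero effect: flagged
```

In practice any such cutoff would be derived from control norms like those the study reports, not fixed a priori.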
Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults
Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah
2016-01-01
The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity of establishing country-specific norms for these tests, indicating that norming data are required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norms for performance on the CFMT+ in any large sample, we also report the first UK-specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability; no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706
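Cut-offs of the kind reported in the entry above are conventionally derived from normative data as scores a fixed number of standard deviations from the control mean (two SDs is a common criterion). A minimal sketch of that convention, using made-up control scores rather than data from any of these studies:

```python
import statistics

# Hypothetical normative scores on a face recognition test
# (illustrative values only; not data from the studies above).
control_scores = [68, 72, 75, 70, 80, 77, 74, 69, 82, 71, 76, 73]

mean = statistics.mean(control_scores)
sd = statistics.stdev(control_scores)  # sample standard deviation

# Common convention: "superior" performance is >= 2 SDs above the
# control mean; prosopagnosia-range performance is <= 2 SDs below it.
super_cutoff = mean + 2 * sd
dp_cutoff = mean - 2 * sd

def classify(score: float) -> str:
    if score >= super_cutoff:
        return "superior"
    if score <= dp_cutoff:
        return "impaired"
    return "typical"
```

The exact criterion (number of SDs, one- vs. two-tailed) varies between studies, which is one reason country- and sample-specific norms matter.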
Schizotypy and impaired basic face recognition? Another non-confirmatory study.
Bell, Vaughan; Halligan, Peter
2015-12-01
Although schizotypy has been found to be reliably associated with reduced recognition of facial affect, the few studies that have tested the association between basic face recognition abilities and schizotypy have found mixed results. This study formally tested the association in a large non-clinical sample with established neurological measures of face recognition. Two hundred and twenty-seven participants completed the Oxford-Liverpool Inventory of Feelings and Experiences schizotypy scale, the Famous Faces Test, and the Cardiff Repeated Recognition Test for Faces. No association between any schizotypal dimension and performance on either of the facial recognition and learning tests was found. The null results can be accepted with a high degree of confidence. Additional evidence is thus provided for a lack of association between schizotypy and basic face recognition deficits. © 2014 Wiley Publishing Asia Pty Ltd.
Normal composite face effects in developmental prosopagnosia.
Biotti, Federica; Wu, Esther; Yang, Hua; Jiahui, Guo; Duchaine, Bradley; Cook, Richard
2017-10-01
Upright face perception is thought to involve holistic processing, whereby local features are integrated into a unified whole. Consistent with this view, the top half of one face appears to fuse perceptually with the bottom half of another when the two are aligned spatially and presented upright. This 'composite face effect' reveals a tendency to integrate information from disparate regions when faces are presented canonically. In recent years, the relationship between susceptibility to the composite effect and face recognition ability has received extensive attention, both in participants with normal face recognition and in participants with developmental prosopagnosia. Previous results suggest that individuals with developmental prosopagnosia may show reduced susceptibility to the effect, suggestive of diminished holistic face processing. Here we describe two studies that examine whether developmental prosopagnosia is associated with reduced composite face effects. Despite using independent samples of developmental prosopagnosics and different composite procedures, we find no evidence for reduced composite face effects. The experiments yielded similar results: highly significant composite effects in both prosopagnosic groups that were similar in magnitude to the effects found in participants with normal face processing. The composite face effects exhibited by both samples and the controls were greatly diminished when stimulus arrangements were inverted. Our finding that the whole-face binding process indexed by the composite effect is intact in developmental prosopagnosia indicates that other factors are responsible for the condition. These results are also inconsistent with suggestions that susceptibility to the composite face effect and face recognition ability are tightly linked.
While the holistic process revealed by the composite face effect may be necessary for typical face perception, it is not sufficient; individual differences in face recognition ability likely reflect variability in multiple sequential processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned from a video showing their face, and three others were learned in a matched control condition without a face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without a face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without a face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms.
Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
How a Hat May Affect 3-Month-Olds' Recognition of a Face: An Eye-Tracking Study
Bulf, Hermann; Valenza, Eloisa; Turati, Chiara
2013-01-01
Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment. PMID:24349378
Prevalence of face recognition deficits in middle childhood.
Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah
2017-02-01
Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition in the absence of any neurological injury, a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties, that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: it is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.
Connors, Michael H.; Barnier, Amanda J.; Coltheart, Max; Langdon, Robyn; Cox, Rochelle E.; Rivolta, Davide; Halligan, Peter W.
2014-01-01
Mirrored-self misidentification delusion is the belief that one's reflection in the mirror is not oneself. This experiment used hypnotic suggestion to impair normal face processing in healthy participants and recreate key aspects of the delusion in the laboratory. From a pool of 439 participants, 22 high hypnotisable participants ("highs") and 20 low hypnotisable participants were selected on the basis of their extreme scores on two separately administered measures of hypnotisability. These participants received a hypnotic induction and a suggestion for either (i) impaired self-face recognition or (ii) impaired recognition of all faces. Participants were tested on their ability to recognize themselves in a mirror and in other visual media, including a photograph, live video, and a handheld mirror, and on their ability to recognize other people, including the experimenter and famous faces. Both suggestions produced impaired self-face recognition and recreated key aspects of the delusion in highs. However, only the suggestion for impaired other-face recognition disrupted recognition of other faces, albeit in a minority of highs. The findings confirm that hypnotic suggestion can disrupt face processing and recreate features of mirrored-self misidentification. The variability seen in participants' responses also corresponds to the heterogeneity seen in clinical patients. An important direction for future research will be to examine the sources of this variability within both clinical patients and the hypnotic model. PMID:24994973
Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna
2011-01-01
This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…
ERIC Educational Resources Information Center
Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.
2011-01-01
The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…
Color constancy in 3D-2D face recognition
NASA Astrophysics Data System (ADS)
Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.
2013-05-01
The face is one of the most popular biometric modalities. However, up to now, color has rarely been actively used in face recognition. Yet it is well known that when a person recognizes a face, color cues can become as important as shape, especially when combined with people's ability to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of incorporating known illuminant color into a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods to estimate the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, face recognition rates improve by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.
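The "computational color constancy methods" mentioned in the entry above span many algorithms; one of the simplest is the gray-world estimator, sketched below as a generic illustration (this is not claimed to be the method used in that paper):

```python
import numpy as np

def gray_world_illuminant(image: np.ndarray) -> np.ndarray:
    """Gray-world assumption: the average scene reflectance is achromatic,
    so each channel's mean is proportional to the illuminant color.
    `image` is an H x W x 3 array of linear RGB values."""
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)  # unit-length illuminant estimate

def remove_color_cast(image: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    # von Kries-style per-channel scaling toward a neutral
    # (equal-energy) illuminant; assumes a unit-length estimate.
    return image / (illuminant * np.sqrt(3.0))
```

For example, an image with a uniform reddish cast comes back with equal channel means after correction, which is the kind of normalization a recognition pipeline can exploit.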
Children's ability to recognize other children's faces.
Feinman, S; Entwisle, D R
1976-06-01
Facial recognition ability was studied with 288 children from four grades (first, second, third, and sixth), who also varied by sex, race, and school type (segregated or integrated). Children judged whether each of 40 pictures of children's faces had been present in a set of 20 pictures viewed earlier. Facial recognition ability increased significantly with each grade but leveled off between ages 8 and 11. Black children's performance was significantly better than white children's, and black children were better at recognizing white faces than white children were at recognizing black faces. Children from an integrated school showed smaller differences in recognizing black or white faces than children from segregated schools, but the effect appeared only for children of the integrated school who also lived in mixed-race neighborhoods.
About-face on face recognition ability and holistic processing
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
2015-01-01
Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027
Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.
Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley
2011-07-01
Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.
Face identity recognition in autism spectrum disorders: a review of behavioral studies.
Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy
2012-03-01
Face recognition, the ability to recognize a person from their facial appearance, is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies, we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively (i.e., in how well facial identity is remembered or discriminated), people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.
[Face recognition in patients with autism spectrum disorders].
Kita, Yosuke; Inagaki, Masumi
2012-07-01
The present study aimed to review previous research conducted on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategy of ASD patients is comparable to those of the control group, depending on the experimental situation or developmental stage, suggesting that face recognition of ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients, and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces, and in the atypical development of face recognition, eliciting symptoms of unstable behavioral characteristics in these patients. Additionally, face recognition in ASD patients is examined from a different perspective, namely self-face recognition, and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the spectrum clinical condition of ASD.
The role of experience-based perceptual learning in the face inversion effect.
Civile, Ciro; Obhi, Sukhvinder S; McLaren, I P L
2018-04-03
Perceptual learning of the type we consider here is a consequence of experience with a class of stimuli. It amounts to an enhanced ability to discriminate between stimuli. We argue that it contributes to the ability to distinguish between faces and recognize individuals, and in particular contributes to the face inversion effect (better recognition performance for upright vs. inverted faces). Previously, we have shown that experience with a prototype-defined category of checkerboards leads to perceptual learning, that this produces an inversion effect, and that this effect can be disrupted by anodal tDCS to Fp3 during pre-exposure. If we can demonstrate that the same tDCS manipulation also disrupts the inversion effect for faces, then this will strengthen the claim that perceptual learning contributes to that effect. The important question, then, is whether this tDCS procedure would significantly reduce the inversion effect for faces: stimuli that we have lifelong expertise with and for which perceptual learning has already occurred. Consequently, in the experiment reported here we investigated the effects of anodal tDCS at Fp3 during an old/new recognition task for upright and inverted faces. Our results show that stimulation significantly reduced the face inversion effect compared to controls. The effect was one of reducing recognition performance for upright faces. This result is the first to show that tDCS affects perceptual learning that has already occurred, disrupting individuals' ability to recognize upright faces. It provides further support for our account of perceptual learning and its role as a key factor in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
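Old/new recognition performance of the kind described in the entry above is commonly scored with the signal detection measure d' (sensitivity), which separates discrimination from response bias. A generic sketch with a standard log-linear correction (not code from the study):

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    The +0.5 / +1 (log-linear) correction keeps rates away from
    0 and 1, where the inverse normal CDF would be infinite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Computing d' separately for upright and inverted faces, and taking the difference, gives a per-participant inversion-effect score of the sort compared across stimulation conditions.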
Sheehan, Michael J; Nachman, Michael W
2014-09-16
Facial recognition plays a key role in human interactions, and there has been great interest in understanding the evolution of human abilities for individual recognition and tracking social relationships. Individual recognition requires sufficient cognitive abilities and phenotypic diversity within a population for discrimination to be possible. Despite the importance of facial recognition in humans, the evolution of facial identity has received little attention. Here we demonstrate that faces evolved to signal individual identity under negative frequency-dependent selection. Faces show elevated phenotypic variation and lower between-trait correlations compared with other traits. Regions surrounding face-associated single nucleotide polymorphisms show elevated diversity consistent with frequency-dependent selection. Genetic variation maintained by identity signalling tends to be shared across populations and, for some loci, predates the origin of Homo sapiens. Studies of human social evolution tend to emphasize cognitive adaptations, but we show that social evolution has shaped patterns of human phenotypic and genetic diversity as well.
McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Gauthier, Isabel
2012-01-01
Individual differences in face recognition are often contrasted with differences in object recognition using a single object category. Likewise, individual differences in perceptual expertise for a given object domain have typically been measured relative to only a single category baseline. In Experiment 1, we present a new test of object recognition, the Vanderbilt Expertise Test (VET), which is comparable in methods to the Cambridge Face Memory Test (CFMT) but uses eight different object categories. Principal component analysis reveals that the underlying structure of the VET can be largely explained by two independent factors, which demonstrate good reliability and capture interesting sex differences inherent in the VET structure. In Experiment 2, we show how the VET can be used to separate domain-specific from domain-general contributions to a standard measure of perceptual expertise. While domain-specific contributions are found for car matching for both men and women and for plane matching in men, women in this sample appear to use more domain-general strategies to match planes. In Experiment 3, we use the VET to demonstrate that holistic processing of faces predicts face recognition independently of general object recognition ability, which has a sex-specific contribution to face recognition. Overall, the results suggest that the VET is a reliable and valid measure of object recognition abilities and can measure both domain-general skills and domain-specific expertise, which were both found to depend on the sex of observers. PMID:22877929
Image dependency in the recognition of newly learnt faces.
Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily
2017-05-01
Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these lead to poor recognition accuracy. Three experiments are reported that extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image, with performance declining as the dissimilarity between the study and test images increased. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural representations.
ERIC Educational Resources Information Center
Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.
2010-01-01
Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…
Sunday, Mackenzie A; Richler, Jennifer J; Gauthier, Isabel
2017-07-01
The part-whole paradigm was one of the first measures of holistic processing, and it has been used to address several topics in face recognition, including its development, other-race effects, and, more recently, whether holistic processing is correlated with face recognition ability. However, the task was not designed to measure individual differences and has produced measurements with low reliability. We created a new holistic processing test designed to measure individual differences based on the part-whole paradigm, the Vanderbilt Part-Whole Test (VPWT). Measurements in the part and whole conditions were reliable, but, surprisingly, there was no evidence for reliable individual differences in the part-whole index (how well a person can take advantage of a face part presented within a whole-face context compared to the part presented without a whole face) because the part and whole conditions were strongly correlated. The same result was obtained in a version of the original part-whole task that was modified to increase its reliability. Controlling for object recognition ability, we found that variance in the whole condition does not predict any additional variance in face recognition over what is already predicted by performance in the part condition.
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise
2013-11-01
Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.
The recognition of emotional expression in prosopagnosia: decoding whole and part faces.
Stephan, Blossom Christa Maree; Breen, Nora; Caine, Diana
2006-11-01
Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole-face configural processing in the recognition of facial expression. A prosopagnosic patient, SC, made expression decisions from whole and incomplete (eyes-only and mouth-only) faces in which the remaining features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness), emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance in the whole-face condition. We argue that in SC interference from damaged configural processes seems to override an intact ability to utilize part-based or local feature cues.
Impaired face detection may explain some but not all cases of developmental prosopagnosia.
Dalrymple, Kirsten A; Duchaine, Brad
2016-05-01
Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e., abnormally slow, or failing to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina
2017-02-01
This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction, towards poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. © 2016 The British Psychological Society.
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The growing popularity of High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping for appropriate visualization on conventional, inexpensive LDR displays. Different tone mapping methods can produce markedly different visualizations, raising several concerns about privacy intrusion. In fact, some methods yield images in which individuals are perceptually recognizable, while others obscure identity altogether. Although perceptual recognition may be possible, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, we present a study in which automatic face recognition based on sparse representation is tested on images produced by common tone mapping operators applied to HDR images, and we describe its ability to recognize face identity. Furthermore, typical LDR images are used for training the face recognition system.
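The recognition approach named in this abstract, sparse representation, classifies a probe face by how well it can be reconstructed from each identity's training images. The sketch below uses random feature vectors as stand-ins for vectorized tone-mapped face images, and a per-class least-squares residual as a simple stand-in for a true ℓ1-sparse solver; none of it reflects the paper's actual pipeline, parameters, or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gallery: 3 identities, 5 training images each, 50-dim features.
# The feature vectors are random stand-ins for vectorized tone-mapped
# face images; the dimensions and noise level are arbitrary assumptions.
n_ids, n_train, dim = 3, 5, 50
prototypes = rng.normal(size=(n_ids, dim))
gallery = np.stack([
    prototypes[i] + 0.1 * rng.normal(size=(n_train, dim))
    for i in range(n_ids)
])  # shape (n_ids, n_train, dim)

def classify(probe):
    """Assign the probe to the identity whose training images best
    reconstruct it (per-class least-squares residual; a simple
    stand-in for the l1-sparse coding used in SRC)."""
    residuals = []
    for cls in gallery:
        coef, *_ = np.linalg.lstsq(cls.T, probe, rcond=None)
        residuals.append(np.linalg.norm(probe - cls.T @ coef))
    return int(np.argmin(residuals))

# A noisy probe of identity 2 should be reconstructed far better by
# identity 2's training images than by the other identities'.
probe = prototypes[2] + 0.1 * rng.normal(size=dim)
print(classify(probe))
```

In the full sparse-representation formulation, the probe is coded sparsely over the concatenated gallery of all identities and assigned to the class with the smallest reconstruction residual; the per-class least-squares variant here merely keeps the sketch dependency-free.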
Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.
Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison
2016-04-01
Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address a conundrum: why does a shared resource across different visual domains not lead to competition and an inverse correlation in abilities? We explain this using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects.
The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24, 2008].
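The core claim above, that a shared resource plus category experience produces a growing face-object correlation, can be caricatured without training any network. In this hypothetical simulation, each observer's domain-general ability v stands in for the model's hidden-unit count, and experience e scales how much object performance draws on v; all parameters are arbitrary assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: each simulated observer has a domain-general ability v.
# Face performance always draws on v; object performance draws on v
# only in proportion to experience e with the object category.
n_subjects = 1000
v = rng.normal(size=n_subjects)

def correlation_at_experience(e):
    face = v + 0.5 * rng.normal(size=n_subjects)
    obj = (e * v
           + (1 - e) * rng.normal(size=n_subjects)
           + 0.5 * rng.normal(size=n_subjects))
    return float(np.corrcoef(face, obj)[0, 1])

# The face-object correlation rises with experience, qualitatively
# matching the behavioral and modeling results described above.
corrs = [correlation_at_experience(e) for e in (0.0, 0.5, 1.0)]
print([round(c, 2) for c in corrs])
```

As e grows, object performance loads increasingly on the shared ability, so the correlation between the two domains rises even though nothing is competed over, which is the pattern the abstract reports.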
Visual scanning behavior is related to recognition performance for own- and other-age faces
Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela
2015-01-01
It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated with discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056
Impaired processing of self-face recognition in anorexia nervosa.
Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi
2016-03-01
Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.
Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone
2014-12-01
Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both the OXT and placebo conditions, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.
Recognizing Dynamic Faces in Malaysian Chinese Participants.
Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D
2016-03-01
The high performance levels seen in face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter facial structure vital to the recognition of unfamiliar faces. Because of these inconsistencies in recognition performance, the current study developed stimuli that closely represent natural social situations to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth and then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differed from a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies. © The Author(s) 2015.
Face emotion recognition is related to individual differences in psychosis-proneness.
Germine, L T; Hooker, C I
2011-05-01
Deficits in face emotion recognition (FER) in schizophrenia are well documented, and have been proposed as a potential intermediate phenotype for schizophrenia liability. However, research on the relationship between psychosis vulnerability and FER has mixed findings and methodological limitations. Moreover, no study has yet characterized the relationship between FER ability and level of psychosis-proneness. If FER ability varies continuously with psychosis-proneness, this suggests a relationship between FER and polygenic risk factors. We tested two large internet samples to see whether psychometric psychosis-proneness, as measured by the Schizotypal Personality Questionnaire-Brief (SPQ-B), is related to differences in face emotion identification and discrimination or other face processing abilities. Experiment 1 (n=2332) showed that psychosis-proneness predicts face emotion identification ability but not face gender identification ability. Experiment 2 (n=1514) demonstrated that psychosis-proneness also predicts performance on face emotion but not face identity discrimination. The tasks in Experiment 2 used identical stimuli and task parameters, differing only in emotion/identity judgment. Notably, the relationships demonstrated in Experiments 1 and 2 persisted even when individuals with the highest psychosis-proneness levels (the putative high-risk group) were excluded from analysis. Our data suggest that FER ability is related to individual differences in psychosis-like characteristics in the normal population, and that these differences cannot be accounted for by differences in face processing and/or visual perception. Our results suggest that FER may provide a useful candidate intermediate phenotype.
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2016-01-01
The study examined children’s recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children (N = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills, which included five tasks (three with faces and two with bodies). Parents and teachers reported on children’s aggressive behaviors and social skills. Children’s emotion accuracy on two of the three facial tasks and one of the body tasks was related to teacher reports of social skills. Some of these relations were moderated by child gender. In particular, the relationships between emotion recognition accuracy and reports of children’s behavior were stronger for boys than girls. Identifying preschool-aged children’s strengths and weaknesses in identification of emotion from faces and body poses may be helpful in guiding interventions with children who have problems with social and behavioral functioning that may be due, in part, to emotional knowledge deficits. Further developmental implications of these findings are discussed. PMID:27057129
Face recognition in newly hatched chicks at the onset of vision.
Wood, Samantha M W; Wood, Justin N
2015-04-01
How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces-for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision. (c) 2015 APA, all rights reserved.
Lin, Chia-Yao; Tien, Yi-Min; Huang, Jong-Tsun; Tsai, Chon-Haw; Hsu, Li-Chuan
2016-01-01
Because of dopaminergic neurodegeneration, patients with Parkinson's disease (PD) show impairment in the recognition of negative facial expressions. In the present study, we aimed to determine whether PD patients with more advanced motor problems would show a much greater deficit in recognition of emotional facial expressions than a control group and whether impairment of emotion recognition would extend to positive emotions. Twenty-nine PD patients and 29 age-matched healthy controls were recruited. Participants were asked to discriminate emotions in Experiment 1 and identify gender in Experiment 2. In Experiment 1, PD patients demonstrated a recognition deficit for negative (sadness and anger) and positive faces. Further analysis showed that only PD patients with high motor dysfunction performed poorly in recognition of happy faces. In Experiment 2, PD patients showed an intact ability for gender identification, and the results eliminated possible abilities in the functions measured in Experiment 2 as alternative explanations for the results of Experiment 1. We concluded that patients' ability to recognize emotions deteriorated as the disease progressed. Recognition of negative emotions was impaired first, and then the impairment extended to positive emotions. PMID:27555668
Arguments Against a Configural Processing Account of Familiar Face Recognition.
Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M
2015-07-01
Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.
Devue, Christel; Barsics, Catherine
2016-10-01
Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super-specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer-term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer-term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror the perceptual and visual short-term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sellaro, Roberta; de Gelder, Beatrice; Finisguerra, Alessandra; Colzato, Lorenza S
2018-02-01
The polyvagal theory suggests that the vagus nerve is the key phylogenetic substrate enabling optimal social interactions, a crucial aspect of which is emotion recognition. A previous study showed that the vagus nerve plays a causal role in mediating people's ability to recognize emotions based on images of the eye region. The aim of this study is to verify whether the previously reported causal link between vagal activity and emotion recognition can be generalized to situations in which emotions must be inferred from images of whole faces and bodies. To this end, we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that causes the vagus nerve to fire by the application of a mild electrical stimulation to the auricular branch of the vagus nerve, located in the anterior protuberance of the outer ear. In two separate sessions, participants received active or sham tVNS before and while performing two emotion recognition tasks, aimed at indexing their ability to recognize emotions from facial and bodily expressions. Active tVNS, compared to sham stimulation, enhanced emotion recognition for whole faces but not for bodies. Our results confirm and further extend recent observations supporting a causal relationship between vagus nerve activity and the ability to infer others' emotional state, but restrict this association to situations in which the emotional state is conveyed by the whole face and/or by salient facial cues, such as eyes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bate, Sarah; Bennetts, Rachel; Mole, Joseph A; Ainge, James A; Gregory, Nicola J; Bobak, Anna K; Bussunt, Amanda
2015-01-01
In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills.
Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.
Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi
2012-12-01
We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in the comprehension of others' emotional facial expressions.
Face-blind for other-race faces: Individual differences in other-race recognition impairments.
Wan, Lulu; Crookes, Kate; Dawel, Amy; Pidcock, Madeleine; Hall, Ashleigh; McKone, Elinor
2017-01-01
We report the existence of a previously undescribed group of people, namely individuals who are so poor at recognition of other-race faces that they meet criteria for clinical-level impairment (i.e., they are "face-blind" for other-race faces). Testing 550 participants, and using the well-validated Cambridge Face Memory Test for diagnosing face blindness, results show the rate of other-race face blindness to be nontrivial, specifically 8.1% of Caucasians and Asians raised in majority own-race countries. Results also show that risk factors for other-race face blindness include a lack of interracial contact and being at the lower end of the normal range of general face recognition ability (i.e., even for own-race faces), but not applying less individuating effort to other-race than to own-race faces. Findings provide a potential resolution of contradictory evidence concerning the importance of the other-race effect (ORE), by explaining how it is possible for the mean ORE to be modest in size (suggesting a genuine but minor problem), and simultaneously for individuals to suffer major functional consequences in the real world (e.g., eyewitness misidentification of other-race offenders leading to wrongful imprisonment). Findings imply that, in legal settings, evaluating an eyewitness's chance of having made an other-race misidentification requires information about the underlying face recognition abilities of the individual witness. Additionally, analogy with prosopagnosia (inability to recognize even own-race faces) suggests that everyday social interactions with other-race people, such as those between colleagues in the workplace, will be seriously impacted by the ORE in some people.
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
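The appearance-based models Turk and Pentland describe became the basis of the "eigenfaces" technique: project face images onto the principal components of a training set and identify a probe by nearest neighbour in that subspace. A minimal sketch follows; the synthetic random vectors stand in for real aligned face images, and the array sizes and component count are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a training set of flattened grayscale face
# images (rows = images); real pipelines use aligned face photographs.
train = rng.normal(size=(20, 64))        # 20 "faces", 64 pixels each
mean_face = train.mean(axis=0)
centered = train - mean_face

# Principal components ("eigenfaces") via SVD of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:8]                      # keep the top 8 components

def project(img):
    """Coordinates of an image in the eigenface subspace."""
    return eigenfaces @ (img - mean_face)

def identify(probe, gallery):
    """Index of the gallery face whose projection is nearest the probe's."""
    coords = project(probe)
    dists = [np.linalg.norm(coords - project(g)) for g in gallery]
    return int(np.argmin(dists))

# A probe that is a lightly perturbed copy of gallery face 3 should
# land nearest that face in the subspace.
gallery = train[:5]
probe = gallery[3] + rng.normal(scale=0.05, size=64)
print(identify(probe, gallery))
```

Recognition thus reduces to a low-dimensional nearest-neighbour search, which is what makes the approach fast enough for the interactive applications the abstract mentions.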
Face Recognition by Metropolitan Police Super-Recognisers.
Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike
2016-01-01
Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability, a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.
Anzures, Gizelle; Kelly, David J; Pascalis, Olivier; Quinn, Paul C; Slater, Alan M; de Viviés, Xavier; Lee, Kang
2014-02-01
We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image processing. The current study also confirms the presence of an ORE in children as young as 5 years of age using a recognition paradigm that is sensitive to their developing cognitive abilities. In addition, the present findings show that with age, increasing experience with familiar classes of own-race faces and further lack of experience with unfamiliar classes of other-race faces serves to maintain the ORE between 5 and 10 years of age rather than exacerbate the effect. All age groups also showed a differential effect of stimulus facial pose in their recognition of the internal regions of own- and other-race faces. Own-race inner faces were remembered best when three-quarter poses were used during familiarization and frontal poses were used during the recognition test. In contrast, other-race inner faces were remembered best when frontal poses were used during familiarization and three-quarter poses were used during the recognition test. Thus, children encode and/or retrieve own- and other-race faces from memory in qualitatively different ways.
Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby
2014-09-01
Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience.
Recognition of face and non-face stimuli in autistic spectrum disorder.
Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H
2013-12-01
The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory.
ERIC Educational Resources Information Center
Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara
2013-01-01
The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…
Age- and gender-related variations of emotion recognition in pseudowords and faces.
Demenescu, Liliana R; Mathiak, Krystyna A; Mathiak, Klaus
2014-01-01
BACKGROUND/STUDY CONTEXT: The ability to interpret emotionally salient stimuli is an important skill for successful social functioning at any age. The objective of the present study was to disentangle age and gender effects on emotion recognition ability in voices and faces. Three age groups of participants (young, age range: 18-35 years; middle-aged, age range: 36-55 years; and older, age range: 56-75 years) identified basic emotions presented in voices and faces in a forced-choice paradigm. Five emotions (angry, fearful, sad, disgusted, and happy) and a nonemotional category (neutral) were shown as encoded in color photographs of facial expressions and pseudowords spoken in affective prosody. Overall, older participants had a lower accuracy rate in categorizing emotions than young and middle-aged participants. Females performed better than males in recognizing emotions from voices, and this gender difference emerged in middle-aged and older participants. The performance of emotion recognition in faces was significantly correlated with the performance in voices. The current study provides further evidence for a general age and gender effect on emotion recognition; the advantage of females seems to be age- and stimulus modality-dependent.
Lateralization of kin recognition signals in the human face
Dal Martello, Maria F.; Maloney, Laurence T.
2010-01-01
When human subjects view photographs of faces, their judgments of identity, gender, emotion, age, and attractiveness depend more on one side of the face than the other. We report an experiment testing whether allocentric kin recognition (the ability to judge the degree of kinship between individuals other than the observer) is also lateralized. One hundred and twenty-four observers judged whether or not pairs of children were biological siblings by looking at photographs of their faces. In three separate conditions, (1) the right hemi-face was masked, (2) the left hemi-face was masked, or (3) the face was fully visible. The d′ measures for the masked left hemi-face and masked right hemi-face were 1.024 and 1.004, respectively (no significant difference), and the d′ measure for the unmasked face was 1.079, not significantly greater than that for either of the masked conditions. We conclude, first, that there is no superiority of one or the other side of the observed face in kin recognition; second, that the information present in the left and right hemi-faces relevant to recognizing kin is completely redundant; and last, that symmetry cues are not used for kin recognition.
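The d′ values reported above come from standard signal detection theory: d′ is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch (the example rates are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative rates: 69% hits against 31% false alarms gives a d'
# close to 1, comparable in magnitude to the kin-judgement results.
print(round(d_prime(0.69, 0.31), 3))
```

A d′ of 0 means performance at chance; larger values mean the observer's "sibling" responses discriminate true sibling pairs from unrelated pairs more reliably.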
An in-depth cognitive examination of individuals with superior face recognition skills.
Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah
2016-09-01
Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings.
Reder, Lynne M; Victoria, Lindsay W; Manelis, Anna; Oates, Joyce M; Dutcher, Janine M; Bates, Jordan T; Cook, Shaun; Aizenstein, Howard J; Quinlan, Joseph; Gyulai, Ferenc
2013-03-01
In two experiments, we provided support for the hypothesis that stimuli with preexisting memory representations (e.g., famous faces) are easier to associate to their encoding context than are stimuli that lack long-term memory representations (e.g., unknown faces). Subjects viewed faces superimposed on different backgrounds (e.g., the Eiffel Tower). Face recognition on a surprise memory test was better when the encoding background was reinstated than when it was swapped with a different background; however, the reinstatement advantage was modulated by how many faces had been seen with a given background, and reinstatement did not improve recognition for unknown faces. The follow-up experiment added a drug intervention that inhibited the ability to form new associations. Context reinstatement did not improve recognition for famous or unknown faces under the influence of the drug. The results suggest that it is easier to associate context to faces that have a preexisting long-term memory representation than to faces that do not.
Body Emotion Recognition Disproportionately Depends on Vertical Orientations during Childhood
ERIC Educational Resources Information Center
Balas, Benjamin; Auen, Amanda; Saville, Alyson; Schmidt, Jamie
2018-01-01
Children's ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency…
Developmental prosopagnosia and the Benton Facial Recognition Test.
Duchaine, Bradley C; Nakayama, Ken
2004-04-13
The Benton Facial Recognition Test is used for clinical and research purposes, but evidence suggests that it is possible to pass the test with impaired face discrimination abilities. The authors tested 11 patients with developmental prosopagnosia using this test, and a majority scored in the normal range. Consequently, scores in the normal range should be interpreted cautiously, and testing should always be supplemented by other face tests.
The Impact of Early Bilingualism on Face Recognition Processes.
Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier
2016-01-01
Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.
Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn
2016-01-01
Local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures; however, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test, and an emotion recognition test were investigated within 25 8- to 12-year-old children with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.
Wilson, C. Ellie; Palermo, Romina; Brock, Jon
2012-01-01
Background: Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings: Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two-alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index', which was incremented each time the participant saccaded into and out of one of the core-feature interest areas, was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance: In support of the second hypothesis, results suggested that increased saccading between core features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.
Tibbetts, Elizabeth A; Injaian, Allison; Sheehan, Michael J; Desjardins, Nicole
2018-05-01
Research on individual recognition often focuses on species-typical recognition abilities rather than assessing intraspecific variation in recognition. As individual recognition is cognitively costly, the capacity for recognition may vary within species. We test how individual face recognition differs between nest-founding queens (foundresses) and workers in Polistes fuscatus paper wasps. Individual recognition mediates dominance interactions among foundresses. Three previously published experiments have shown that foundresses (1) benefit by advertising their identity with distinctive facial patterns that facilitate recognition, (2) have robust memories of individuals, and (3) rapidly learn to distinguish between face images. Like foundresses, workers have variable facial patterns and are capable of individual recognition. However, worker dominance interactions are muted. Therefore, individual recognition may be less important for workers than for foundresses. We find that (1) workers with unique faces receive amounts of aggression similar to those of workers with common faces, indicating that wasps do not benefit from advertising their individual identity with a unique appearance; (2) workers lack robust memories for individuals, as they cannot remember unique conspecifics after a 6-day separation; and (3) workers learn to distinguish between facial images more slowly than foundresses during training. The recognition differences between foundresses and workers are notable because Polistes lack discrete castes; foundresses and workers are morphologically similar, and workers can take over as queens. Overall, social benefits and receiver capacity for individual recognition are surprisingly plastic.
Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.
Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H
2016-01-01
Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive-viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. The enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.
Face Recognition and Description Abilities in People with Mild Intellectual Disabilities
ERIC Educational Resources Information Center
Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R.; Hancock, Peter J. B.
2013-01-01
Background: People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Method: Two…
Where Cognitive Development and Aging Meet: Face Learning Ability Peaks after Age 30
ERIC Educational Resources Information Center
Germine, Laura T.; Duchaine, Bradley; Nakayama, Ken
2011-01-01
Research on age-related cognitive change traditionally focuses on either development or aging, where development ends with adulthood and aging begins around 55 years. This approach ignores age-related changes during the 35 years in-between, implying that this period is uninformative. Here we investigated face recognition as an ability that may…
Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin
2016-09-01
We conducted a preliminary study using hybrid hemi-facial photos to investigate whether experiencing right- or left-sided facial paralysis would affect an individual's ability to recognize one side of the human face. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: n = 50, left side: n = 50, not including traumatic facial nerve paralysis) completed the facial disability index test and a quality of life questionnaire (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion of participants showing right-side predominance in human face recognition was larger than that showing left-side predominance (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). The universal preference for the right side in human face recognition was associated with worse psychological mood and social interaction in patients with right-sided facial paralysis than in those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis.
Associative (prosop)agnosia without (apparent) perceptual deficits: a case-study.
Anaki, David; Kaufman, Yakir; Freedman, Morris; Moscovitch, Morris
2007-04-09
In associative agnosia, early perceptual processing of faces or objects is considered to be intact, while the ability to access stored semantic information about the individual face or object is impaired. Recent claims, however, have asserted that associative agnosia is also characterized by deficits at the perceptual level, which are too subtle to be detected by current neuropsychological tests. Thus, the impaired identification of famous faces or common objects in associative agnosia stems from difficulties in extracting the minute perceptual details required to identify a face or an object. In the present study, we report the case of patient DBO, who has a left occipital infarct and shows impaired object and famous-face recognition. Despite his disability, he exhibits a face inversion effect, and is able to select a famous face from among non-famous distractors. In addition, his performance is normal on immediate and delayed recognition memory tests for faces whose external features were deleted. His deficits in face recognition are apparent only when he is required to name a famous face, or to select two faces from among a triad of famous figures based on their semantic relationships (a task which does not require access to names). The nature of his deficits in object perception and recognition is similar to his impairments in the face domain. This pattern of behavior supports the notion that apperceptive and associative agnosia reflect distinct and dissociated deficits, which result from damage to different stages of the face and object recognition process.
Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome
ERIC Educational Resources Information Center
Shaw, Tracey A.; Porter, Melanie A.
2013-01-01
This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…
The Other-Race Effect Develops During Infancy
Quinn, Paul C.; Slater, Alan M.; Lee, Kang; Ge, Liezhong; Pascalis, Olivier
2008-01-01
Experience plays a crucial role in the development of face processing. In the study reported here, we investigated how faces observed within the visual environment affect the development of the face-processing system during the 1st year of life. We assessed 3-, 6-, and 9-month-old Caucasian infants' ability to discriminate faces within their own racial group and within three other-race groups (African, Middle Eastern, and Chinese). The 3-month-old infants demonstrated recognition in all conditions, the 6-month-old infants were able to recognize Caucasian and Chinese faces only, and the 9-month-old infants' recognition was restricted to own-race faces. The pattern of preferences indicates that the other-race effect is emerging by 6 months of age and is present at 9 months of age. The findings suggest that facial input from the infant's visual environment is crucial for shaping the face-processing system early in infancy, resulting in differential recognition accuracy for faces of different races in adulthood. PMID:18031416
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g., smiles) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting of visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Duchaine, Brad; Nakayama, Ken
2006-01-01
The two standardized tests of face recognition that are widely used suffer from serious shortcomings [Duchaine, B. & Weidenfeld, A. (2003). An evaluation of two commonly used tests of unfamiliar face recognition. Neuropsychologia, 41, 713-720; Duchaine, B. & Nakayama, K. (2004). Developmental prosopagnosia and the Benton Facial Recognition Test. Neurology, 62, 1219-1220]. Images in the Warrington Recognition Memory for Faces test include substantial non-facial information, and the simultaneous presentation of faces in the Benton Facial Recognition Test allows feature matching. Here, we present results from a new test, the Cambridge Face Memory Test, which builds on the strengths of the previous tests. In the test, participants are introduced to six target faces, and then they are tested with forced choice items consisting of three faces, one of which is a target. For each target face, three test items contain views identical to those studied in the introduction, five present novel views, and four present novel views with noise. There are a total of 72 items, and 50 controls averaged 58. To determine whether the test requires the special mechanisms used to recognize upright faces, we conducted two experiments. We predicted that controls would perform much more poorly when the face images are inverted, and as predicted, inverted performance was much worse with a mean of 42. Next we assessed whether eight prosopagnosics would perform poorly on the upright version. The prosopagnosic mean was 37, and six prosopagnosics scored outside the normal range. In contrast, the Warrington test and the Benton test failed to classify a majority of the prosopagnosics as impaired. These results indicate that the new test effectively assesses face recognition across a wide range of abilities.
Can Changes in Eye Movement Scanning Alter the Age-Related Deficit in Recognition Memory?
Chan, Jessica P. K.; Kamino, Daphne; Binns, Malcolm A.; Ryan, Jennifer D.
2011-01-01
Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults' recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces under free viewing conditions (bases), through a gaze-contingent moving window (own), or through a moving window that replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on the age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities. PMID:21687460
Age-related increase of image-invariance in the fusiform face area.
Nordt, Marisa; Semmelmann, Kilian; Genç, Erhan; Weigelt, Sarah
2018-06-01
Face recognition undergoes prolonged development from childhood to adulthood, raising the question of which neural underpinnings drive this development. Here, we address the development of the neural foundation of the ability to recognize a face across naturally varying images. Fourteen children (ages 7-10) and 14 adults (ages 20-23) watched images of either the same or different faces in a functional magnetic resonance imaging adaptation paradigm. The same face was either presented in exact image repetitions or in varying images. Additionally, a subset of participants completed a behavioral task in which they decided whether the face in consecutively presented images belonged to the same person. Results revealed age-related increases in neural sensitivity to face identity in the fusiform face area. Importantly, ventral temporal face-selective regions exhibited more image-invariance, as indicated by stronger adaptation for different images of the same person, in adults compared to children. Crucially, the amount of adaptation to face identity across varying images was correlated with the ability to recognize individual faces in different images. These results suggest that the increase of image-invariance in face-selective regions might be related to the development of face recognition skills. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
The Facial Appearance of CEOs: Faces Signal Selection but Not Performance.
Stoker, Janka I; Garretsen, Harry; Spreeuwers, Luuk J
2016-01-01
Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs' faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance.
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed by expert knowledge, such as Gabor filters, local binary patterns (LBP), local ternary patterns (LTP), and histograms of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
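The hybrid-feature idea described above, concatenating a handcrafted texture descriptor with learned deep features before classification, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: a basic single-scale 3x3 LBP stands in for multi-level LBP, and a random vector stands in for CNN activations; the real pipeline would train a CNN and feed the combined vector to an SVM.

```python
import numpy as np

rng = np.random.default_rng(1)

def lbp_histogram(img):
    """Basic local binary pattern: threshold each 3x3 neighborhood at its
    center pixel and histogram the resulting 8-bit codes."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= img[1:-1, 1:-1]).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = rng.integers(0, 256, size=(8, 8))      # toy "face image"
handcrafted = lbp_histogram(img)             # 256-bin LBP histogram
deep = rng.normal(size=32)                   # placeholder for CNN features
hybrid = np.concatenate([handcrafted, deep]) # the combined feature vector
print(hybrid.shape)
```

The concatenated vector would then be passed to an SVM trained to separate real from presentation-attack samples.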
Newborns' Face Recognition over Changes in Viewpoint
ERIC Educational Resources Information Center
Turati, Chiara; Bulf, Hermann; Simion, Francesca
2008-01-01
The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…
Dawson, Geraldine; Webb, Sara Jane; Wijsman, Ellen; Schellenberg, Gerard; Estes, Annette; Munson, Jeffrey; Faja, Susan
2005-01-01
Neuroimaging and behavioral studies have shown that children and adults with autism have impaired face recognition. Individuals with autism also exhibit atypical event-related brain potentials to faces, characterized by a failure to show a negative component (N170) latency advantage to face compared to nonface stimuli and a bilateral, rather than right lateralized, pattern of N170 distribution. In this report, performance by 143 parents of children with autism on standardized verbal, visual-spatial, and face recognition tasks was examined. It was found that parents of children with autism exhibited a significant decrement in face recognition ability relative to their verbal and visual spatial abilities. Event-related brain potentials to face and nonface stimuli were examined in 21 parents of children with autism and 21 control adults. Parents of children with autism showed an atypical event-related potential response to faces, which mirrored the pattern shown by children and adults with autism. These results raise the possibility that face processing might be a functional trait marker of genetic susceptibility to autism. Discussion focuses on hypotheses regarding the neurodevelopmental and genetic basis of altered face processing in autism. A general model of the normal emergence of social brain circuitry in the first year of life is proposed, followed by a discussion of how the trajectory of normal development of social brain circuitry, including cortical specialization for face processing, is altered in individuals with autism. The hypothesis that genetic-mediated dysfunction of the dopamine reward system, especially its functioning in social contexts, might account for altered face processing in individuals with autism and their relatives is discussed.
Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping
2012-01-01
The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
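Relative metamnemonic accuracy of the kind measured here is commonly quantified with the Goodman-Kruskal gamma correlation between item-by-item judgments of learning (JOLs) and later recognition. Both the choice of statistic and the data below are illustrative assumptions; the abstract does not specify the measure used.

```python
# Goodman-Kruskal gamma over all item pairs: concordant pairs (higher JOL,
# better outcome) minus discordant pairs, normalized by their total.
def gamma(jols, outcomes):
    """Gamma = (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    n = len(jols)
    for i in range(n):
        for j in range(i + 1, n):
            product = (jols[i] - jols[j]) * (outcomes[i] - outcomes[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

jols = [90, 70, 50, 30]   # hypothetical study-time predictions (0-100)
hits = [1, 1, 0, 0]       # 1 = face later recognized
print(gamma(jols, hits))  # → 1.0 (perfect relative accuracy)
```

Lower gamma for other-race faces than for own-race faces would correspond to the reduced metamnemonic accuracy the study reports.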
Effect of familiarity and viewpoint on face recognition in chimpanzees
Parr, Lisa A; Siebert, Erin; Taubert, Jessica
2012-01-01
Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influences of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, as in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions. PMID:22128558
Valla, Jeffrey M; Maendel, Jeffrey W; Ganzel, Barbara L; Barsky, Andrew R; Belmonte, Matthew K
2013-01-01
Autistic face processing difficulties are either uniquely social or due to a piecemeal cognitive "style." Co-morbidity of social deficits and piecemeal cognition in autism makes teasing apart these accounts difficult. These traits vary normally, and are more separable in the general population, suggesting another way to compare accounts. Participants completed the Autism Quotient survey of autistic traits, and one of three face recognition tests: full-face, eyes-only, or mouth-only. Social traits predicted performance in the full-face condition in both sexes. Eyes-only males' performance was predicted by a social × cognitive trait interaction: attention to detail boosted face recognition in males with few social traits, but hindered performance in those reporting many social traits. This suggests social/non-social Autism Spectrum Conditions (ASC) trait interactions at the behavioral level. In the presence of few ASC-like difficulties in social reciprocity, an ASC-like attention to detail may confer advantages on typical males' face recognition skills. On the other hand, when attention to detail co-occurs with difficulties in social reciprocity, a detailed focus may exacerbate such already present social difficulties, as is thought to occur in autism.
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and the face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, the time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.
Newborns' Face Recognition Is Based on Spatial Frequencies below 0.5 Cycles per Degree
ERIC Educational Resources Information Center
de Heering, Adelaide; Turati, Chiara; Rossion, Bruno; Bulf, Hermann; Goffaux, Valerie; Simion, Francesca
2008-01-01
A critical question in Cognitive Science concerns how knowledge of specific domains emerges during development. Here we examined how limitations of the visual system during the first days of life may shape subsequent development of face processing abilities. By manipulating the bands of spatial frequencies of face images, we investigated what is…
NASA Astrophysics Data System (ADS)
Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.
2017-06-01
We present a novel method to optimize the discrimination ability and noise robustness of composite filters. This method is based on iterative preprocessing of training images, which can extract boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio for authentic faces and conferring immunity to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve any of the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method of counting true-positive and false-positive rates based on the difference between PCE and a threshold.
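The PCE criterion at the heart of this approach can be illustrated in a few lines. This is a generic sketch, not the authors' filter design: random arrays stand in for a face image and a composite filter, and correlation is computed in the frequency domain. An authentic face should produce a sharp correlation peak (high PCE), while an impostor should not.

```python
import numpy as np

rng = np.random.default_rng(2)
face = rng.normal(size=(16, 16))
filt = face + 0.1 * rng.normal(size=(16, 16))  # filter tuned to this face

def correlate(image, template):
    """Circular cross-correlation computed via the FFT."""
    return np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real

def pce(plane):
    """PCE: squared correlation peak divided by the mean energy of the plane."""
    peak = plane.flat[np.abs(plane).argmax()]
    return peak ** 2 / np.mean(plane ** 2)

corr_auth = correlate(face, filt)                      # authentic: sharp peak
corr_imp = correlate(rng.normal(size=(16, 16)), filt)  # impostor: no peak
print(pce(corr_auth), pce(corr_imp))
```

Thresholding the PCE (or, as the authors propose, the difference between PCE and a threshold) then yields the true-positive and false-positive rates used to trace an ROC curve.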
Face recognition with the Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Suarez, Pedro F.
1991-12-01
The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
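The eigenface procedure the thesis describes, projecting faces onto the leading eigenvectors of the training covariance matrix and using the projection coefficients as features, can be sketched as follows. This is a minimal sketch in which small synthetic vectors stand in for real flattened face images.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))        # 20 training "faces", 64 pixels each
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenvectors of the covariance matrix form the KLT basis ("eigenfaces")
cov = centered.T @ centered / len(faces)
eigvals, eigvecs = np.linalg.eigh(cov)   # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigenfaces = eigvecs[:, order[:10]]      # keep the 10 leading eigenfaces

# Features: projection coefficients of each face onto the eigenfaces
features = centered @ eigenfaces

# Reconstruction from the truncated basis illustrates the compression aspect
recon = features @ eigenfaces.T + mean_face
err = np.linalg.norm(faces - recon) / np.linalg.norm(faces)
print(features.shape, round(err, 3))
```

Recognition then reduces to comparing a probe's coefficient vector against the stored coefficients of known faces, e.g., by nearest-neighbor distance.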
Typical and Atypical Development of Functional Connectivity in the Face Network.
Song, Yiying; Zhu, Qi; Li, Jingguang; Wang, Xu; Liu, Jia
2015-10-28
Extensive studies have demonstrated that face recognition performance does not reach adult levels until adolescence. However, there is no consensus on whether such prolonged improvement stems from the development of general cognitive factors or of face-specific mechanisms. Here, we used behavioral experiments and functional magnetic resonance imaging (fMRI) to evaluate these two hypotheses. With a large cohort of children (n = 379), we found that face-specific recognition ability in humans increased with age throughout childhood and into late adolescence in both face memory and face perception. Neurally, to circumvent the potential problem of age differences in task performance, attention, or cognitive strategies in task-state fMRI studies, we measured the resting-state functional connectivity (RSFC) between the occipital face area (OFA) and fusiform face area (FFA) in the human brain and found that the OFA-FFA RSFC increased until 11-13 years of age. Moreover, the OFA-FFA RSFC was selectively impaired in adults with developmental prosopagnosia (DP). In contrast, no age-related changes or differences between DP and normal adults were observed for RSFCs in the object system. Finally, the OFA-FFA RSFC matured earlier than face selectivity in either the OFA or the FFA. These results suggest a critical role of the OFA-FFA RSFC in the development of face recognition. Together, our findings support the hypothesis that the prolonged development of face recognition is face specific, not domain general. Copyright © 2015 the authors.
Olsen, Rosanna K; Lee, Yunjo; Kube, Jana; Rosenbaum, R Shayna; Grady, Cheryl L; Moscovitch, Morris; Ryan, Jennifer D
2015-04-01
Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations of single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding, and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, her viewing was directed primarily to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, implicating a relationship between eye movement behavior and hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. Copyright © 2015 the authors.
Impaired perception of facial emotion in developmental prosopagnosia.
Biotti, Federica; Cook, Richard
2016-08-01
Developmental prosopagnosia (DP) is a neurodevelopmental condition characterised by difficulties recognising faces. Despite severe difficulties recognising facial identity, expression recognition is typically thought to be intact in DP; case studies have described individuals who are able to correctly label photographic displays of facial emotion, and no group differences have been reported. This pattern of deficits suggests a locus of impairment relatively late in the face processing stream, after the divergence of expression and identity analysis pathways. To date, however, there has been little attempt to investigate emotion recognition systematically in a large sample of developmental prosopagnosics using sensitive tests. In the present study, we describe three complementary experiments that examine emotion recognition in a sample of 17 developmental prosopagnosics. In Experiment 1, we investigated observers' ability to make binary classifications of whole-face expression stimuli drawn from morph continua. In Experiment 2, observers judged facial emotion using only the eye-region (the rest of the face was occluded). Analyses of both experiments revealed diminished ability to classify facial expressions in our sample of developmental prosopagnosics, relative to typical observers. Imprecise expression categorisation was particularly evident in those individuals exhibiting apperceptive profiles, associated with problems encoding facial shape accurately. Having split the sample of prosopagnosics into apperceptive and non-apperceptive subgroups, only the apperceptive prosopagnosics were impaired relative to typical observers. In our third experiment, we examined the ability of observers to classify the emotion present within segments of vocal affect. Despite difficulties judging facial emotion, the prosopagnosics exhibited excellent recognition of vocal affect. Contrary to the prevailing view, our results suggest that many prosopagnosics do experience difficulties classifying expressions, particularly those with apperceptive profiles. These individuals may have difficulties forming view-invariant structural descriptions at an early stage in the face processing stream, before identity and expression pathways diverge. Copyright © 2016 Elsevier Ltd. All rights reserved.
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2013-10-30
This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition of 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the healthy control group but better than the NPS group in overall facial expression recognition, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
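The sensitivity index d' from signal detection theory, computed per emotion in studies like this one, is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with hypothetical counts; the log-linear correction (+0.5) is an assumption to keep rates away from 0 and 1, as the abstract does not specify how extreme rates were handled.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a +0.5 correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one emotion: 40 hits, 10 misses,
# 5 false alarms, 45 correct rejections
print(round(d_prime(40, 10, 5, 45), 2))
```

Higher d' indicates better discrimination of that emotion from distractors, independent of response bias; chance performance yields d' near zero.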
Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan
2015-11-01
There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal-hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.
Development of coffee maker service robot using speech and face recognition systems using POMDP
NASA Astrophysics Data System (ADS)
Budiharto, Widodo; Meiliana; Santoso Gunawan, Alexander Agung
2016-07-01
There are many development of intelligent service robot in order to interact with user naturally. This purpose can be done by embedding speech and face recognition ability on specific tasks to the robot. In this research, we would like to propose Intelligent Coffee Maker Robot which the speech recognition is based on Indonesian language and powered by statistical dialogue systems. This kind of robot can be used in the office, supermarket or restaurant. In our scenario, robot will recognize user's face and then accept commands from the user to do an action, specifically in making a coffee. Based on our previous work, the accuracy for speech recognition is about 86% and face recognition is about 93% in laboratory experiments. The main problem in here is to know the intention of user about how sweetness of the coffee. The intelligent coffee maker robot should conclude the user intention through conversation under unreliable automatic speech in noisy environment. In this paper, this spoken dialog problem is treated as a partially observable Markov decision process (POMDP). We describe how this formulation establish a promising framework by empirical results. The dialog simulations are presented which demonstrate significant quantitative outcome.
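In a POMDP dialogue system of this kind, the robot maintains a belief distribution over the user's hidden intent and updates it after each noisy speech observation. The sketch below is hypothetical: the sweetness states and confusion probabilities are invented for illustration and are not taken from the robot described in the paper.

```python
# Hypothetical belief tracking over the user's intended coffee sweetness.
SWEETNESS = ["low", "medium", "high"]

# P(word heard | true intent): an assumed model of a noisy speech recognizer
OBS_MODEL = {
    "low":    {"low": 0.70, "medium": 0.20, "high": 0.10},
    "medium": {"low": 0.15, "medium": 0.70, "high": 0.15},
    "high":   {"low": 0.10, "medium": 0.20, "high": 0.70},
}

def update_belief(belief, heard):
    """Bayesian belief update: b'(s) is proportional to P(heard | s) * b(s)."""
    unnorm = {s: OBS_MODEL[s][heard] * belief[s] for s in SWEETNESS}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = {s: 1 / 3 for s in SWEETNESS}   # uniform prior over intents
for heard in ["high", "high"]:           # two noisy recognition results
    belief = update_belief(belief, heard)
print(max(belief, key=belief.get))       # → high
```

A full POMDP policy would additionally weigh the cost of asking a clarifying question against the cost of brewing the wrong coffee; the belief update above is the core inference step that makes the dialogue robust to recognition errors.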
Eye-movement strategies in developmental prosopagnosia and "super" face recognition.
Bobak, Anna K; Parris, Benjamin A; Gregory, Nicola J; Bennetts, Rachel J; Bate, Sarah
2017-02-01
Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose-a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.
Development of the other-race effect during infancy: evidence toward universality?
Kelly, David J; Liu, Shaoying; Lee, Kang; Quinn, Paul C; Pascalis, Olivier; Slater, Alan M; Ge, Liezhong
2009-09-01
The other-race effect in face processing develops within the first year of life in Caucasian infants. It is currently unknown whether the developmental trajectory observed in Caucasian infants can be extended to other cultures. This is an important issue to investigate because recent findings from cross-cultural psychology have suggested that individuals from Eastern and Western backgrounds tend to perceive the world in fundamentally different ways. To this end, the current study investigated 3-, 6-, and 9-month-old Chinese infants' ability to discriminate faces within their own racial group and within two other racial groups (African and Caucasian). The 3-month-olds demonstrated recognition in all conditions, whereas the 6-month-olds recognized Chinese faces and displayed marginal recognition for Caucasian faces but did not recognize African faces. The 9-month-olds' recognition was limited to Chinese faces. This pattern of development is consistent with the perceptual narrowing hypothesis that our perceptual systems are shaped by experience to be optimally sensitive to stimuli most commonly encountered in one's unique cultural environment.
Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt
2018-04-01
The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The utility of multiple synthesized views in the recognition of unfamiliar faces.
Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B
2017-05-01
The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins
Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria
2015-01-01
Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One’s own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin’s face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249
[Developmental change in facial recognition by premature infants during infancy].
Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu
2014-09-01
Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be impaired commonly in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were determined and analyzed using an eye-tracking system (Tobii T60, manufactured by Tobii Technology, Sweden). Both groups of infants had a preference towards normal facial expressions; however, no preference towards the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.
Memory Abilities in Williams Syndrome: Dissociation or Developmental Delay Hypothesis?
ERIC Educational Resources Information Center
Sampaio, Adriana; Sousa, Nuno; Fernandez, Montse; Henriques, Margarida; Goncalves, Oscar F.
2008-01-01
Williams syndrome (WS) is a neurodevelopmental genetic disorder often described as being characterized by a dissociative cognitive architecture, in which profound impairments of visuo-spatial cognition contrast with relative preservation of linguistic, face recognition and auditory short-memory abilities. This asymmetric and dissociative cognition…
Emotion Recognition in Face and Body Motion in Bulimia Nervosa.
Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate
2017-11-01
Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
ERIC Educational Resources Information Center
Wild, Heather A.; Barett, Susan E.; Spence, Melanie J.; O'Toole, Alice J.; Cheng, Yi D.; Brooke, Jessica
2000-01-01
Investigated 7-year-olds', 9-year-olds', and adults' ability to classify children's and adults' faces by sex using only biological based internal facial structure. Found that participants categorized adult faces by sex at accuracy levels varying from just above chance (7-year-olds) to nearly perfect (adults). All groups were less accurate for…
How Negative Social Bias Affects Memory for Faces: An Electrical Neuroimaging Study
Proverbio, Alice Mado; La Mastra, Francesca; Zani, Alberto
2016-01-01
During social interactions, we make inferences about people's personal characteristics based on their appearance. These inferences form a potential prejudice that can positively or negatively bias our interaction with them. Not much is known about the effects of negative bias on face perception and the ability to recognize people's faces. This ability was investigated by recording event-related potentials (ERPs) from 128 sites in 16 volunteers. In the first session (encoding), they viewed 200 faces associated with a short fictional story that described anecdotal positive or negative characteristics about each person. In the second session (recognition), they underwent an old/new memory test, in which they had to distinguish 100 new faces from the previously shown faces. ERP data from the encoding phase showed a larger anterior negativity in response to negatively (vs. positively) biased faces, indicating additional processing of faces with unpleasant social traits. In the recognition task, new faces elicited a larger FN400 than old faces, and positive faces a larger FN400 than negative faces. Additionally, old faces elicited a larger Old-New parietal response than new faces, in the form of an enlarged late positive (LPC) component. An inverse solution SwLORETA (450–550 ms) indicated that remembering old faces was associated with the activation of right superior frontal gyrus (SFG), left medial temporal gyrus, and right fusiform gyrus. Only negatively connoted faces strongly activated the limbic and parahippocampal areas and the left SFG. A dissociation was found between familiarity (modulated by negative bias) and recollection (distinguishing old from new faces). PMID:27655327
[The effects of normal aging on face naming and recognition of famous people: battery 75].
Pluchon, C; Simonnet, E; Toullat, G; Gil, R
2002-07-01
Difficulty recalling proper names is a common complaint among elderly people. We therefore sought to build and standardize a tool allowing a quantified assessment of the ability to name and recognize the faces of famous people, specifying the contribution of gender, age, and cultural level for each test. The performances of 542 subjects, divided into 3 age brackets and 3 educational levels, were analysed. To produce the test material, the artistic team of the Grevin Museum (Paris) was called upon; their work offers a homogeneous way of rendering famous people's faces. A single photographer thus photographed 75 characters from different social categories under the same lighting conditions, during a single day. The results show that men perform better than women on the naming task, but that there is no gender difference on the recognition task. Recognition performance is significantly better than naming performance whatever the age, gender, and cultural level. In general, performance is better the younger the subjects are and the higher their cultural level. Our study thus confirms that normal aging is accompanied by increasing difficulty in naming faces. Moreover, the results suggest that face recognition remains better preserved and that the greater difficulty in recalling a name is linked to problems of lexical access.
Kita, Yosuke; Gunji, Atsuko; Inoue, Yuki; Goto, Takaaki; Sakihara, Kotoe; Kaga, Makiko; Inagaki, Masumi; Hosokawa, Toru
2011-06-01
It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth
2016-01-01
Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boost the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion, and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, which enjoys the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of extracted components showed that the proposed model has an average accuracy of 97.77% in the recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in the recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.
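The core idea of expressing inputs through a low-dimensional nonlinear bottleneck can be illustrated with a toy autoencoder. This is a hedged sketch under invented assumptions (dimensions, learning rate, and random data are our own), not the authors' dynamic recurrent architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bottleneck autoencoder: 16-d "face" vectors are squeezed through a
# 3-d nonlinear code, loosely mirroring the low-dimensional manifold idea.
D, H = 16, 3
W_enc = rng.normal(0, 0.1, (D, H))
W_dec = rng.normal(0, 0.1, (H, D))

def forward(X):
    code = np.tanh(X @ W_enc)        # nonlinear bottleneck components
    return code, code @ W_dec        # linear reconstruction from the code

X = rng.normal(size=(200, D))
initial_mse = np.mean((forward(X)[1] - X) ** 2)

lr = 0.05
for _ in range(500):                  # plain gradient descent on squared error
    code, recon = forward(X)
    err = (recon - X) / len(X)
    grad_dec = code.T @ err
    grad_code = (err @ W_dec.T) * (1 - code ** 2)   # backprop through tanh
    W_dec -= lr * grad_dec
    W_enc -= lr * (X.T @ grad_code)

code, recon = forward(X)
final_mse = np.mean((recon - X) ** 2)
print(code.shape, final_mse < initial_mse)
```

Each row of `code` is a 3-component description of one input; in the paper's setting such components would separately capture identity, emotion, and intensity, and recombining them drives the synthesis of new videos.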
Impact of severity of drug use on discrete emotions recognition in polysubstance abusers.
Fernández-Serrano, María José; Lozano, Oscar; Pérez-García, Miguel; Verdejo-García, Antonio
2010-06-01
Neuropsychological studies support the association between severity of drug intake and alterations in specific cognitive domains and neural systems, but there is disproportionately less research on the neuropsychology of emotional alterations associated with addiction. One of the key aspects of adaptive emotional functioning potentially relevant to addiction progression and treatment is the ability to recognize basic emotions in the faces of others. Therefore, the aims of this study were: (i) to examine facial emotion recognition in abstinent polysubstance abusers, and (ii) to explore the association between patterns of quantity and duration of use of several drugs co-abused (including alcohol, cannabis, cocaine, heroin and MDMA) and the ability to identify discrete facial emotional expressions portraying basic emotions. We compared accuracy of emotion recognition of facial expressions portraying six basic emotions (measured with the Ekman Faces Test) between polysubstance abusers (PSA, n=65) and non-drug using comparison individuals (NDCI, n=30), and used regression models to explore the association between quantity and duration of use of the different drugs co-abused and indices of recognition of each of the six emotions, while controlling for relevant socio-demographic and affect-related confounders. Results showed: (i) that PSA had significantly poorer recognition than NDCI for facial expressions of anger, disgust, fear and sadness; (ii) that measures of quantity and duration of drugs used significantly predicted poorer discrete emotions recognition: quantity of cocaine use predicted poorer anger recognition, and duration of cocaine use predicted both poorer anger and fear recognition. Severity of cocaine use also significantly predicted overall recognition accuracy. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.
A voxel-based lesion study on facial emotion recognition after penetrating brain injury
Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan
2013-01-01
The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM mapping results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Beside those common areas, damage to the bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
The Ability of Visually Impaired Children to Read Expressions and Recognize Faces.
ERIC Educational Resources Information Center
Ellis, H. D.; And Others
1987-01-01
Seventeen visually impaired children, aged 7-11 years, were compared with sighted children on a test of facial recognition and a test of expression identification. The visually impaired children were less able to recognize faces successfully but showed no disadvantage in discerning facial expressions such as happiness, anger, surprise, or fear.…
Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi
2017-09-01
People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), and no significant changes were found for the rest of the facial expressions (p > 0.05). After ECT, response times to the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition ability is an important social cognitive skill for social harmony, proper relationships, and independent living. At the least, the ECT sessions do not seem to affect facial emotion recognition ability negatively, and seem to improve identification of the disgusted facial emotion, which is related to dopamine-enriched regions of the brain.
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) database and the Facial Recognition Technology (FERET) database. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational setup, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA-based face recognition systems, respectively. With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has 47 percent more probability of breaking in as a randomly chosen target subject for the commercial system as compared to a hill-climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill-climbing attack, where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
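The core of the reconstruction attack described above can be illustrated with a toy sketch. Unlike the paper, which must first learn the affine model of the matcher from scores, this sketch assumes the attacker already knows the affine transformation and observes exact distances; the target's coordinates in the affine space are then multilaterated from one match score per break-in template, and the template is recovered by inverting the map. All names, dimensions, and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                              # toy template dimensionality
A = rng.normal(size=(d, d))        # hypothetical affine model of the matcher
b = rng.normal(size=d)

def match_distance(t1, t2):
    # stand-in for the black-box matcher: Euclidean distance in the affine space
    return np.linalg.norm((A @ t1 + b) - (A @ t2 + b))

# break-in set: d+1 templates the attacker controls, in general position
break_in = rng.normal(size=(d + 1, d))
target = rng.normal(size=d)        # enrolled template (unknown to attacker)

# scores the attacker observes: one match per break-in template
scores = np.array([match_distance(x, target) for x in break_in])

# multilaterate the target's coordinates y in the affine space:
# ||y - y_i||^2 = s_i^2; subtracting the first equation gives a linear system
Y = break_in @ A.T + b             # break-in points embedded in the affine space
M = 2 * (Y[1:] - Y[0])
rhs = (scores[0]**2 - scores[1:]**2) + (Y[1:]**2).sum(1) - (Y[0]**2).sum()
y, *_ = np.linalg.lstsq(M, rhs, rcond=None)

# invert the affine map to reconstruct the enrolled template
reconstructed = np.linalg.solve(A, y - b)
print(np.allclose(reconstructed, target, atol=1e-6))  # True
```

With noisy or quantized scores (as in the paper), the same system would be solved in a least-squares sense, degrading rather than breaking the reconstruction.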
Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A.; Ronda, Joseph M.; Duffy, Jeanne F.
2015-01-01
Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-hour interval between presentation and recall included an 8-hour nighttime sleep opportunity (“Sleep”), while in the other condition they remained awake (“Wake”). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the “Wake” and “Sleep” conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. PMID:26549626
Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F
2015-12-01
Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. Copyright © 2015 Elsevier Inc. All rights reserved.
Deep learning and face recognition: the state of the art
NASA Astrophysics Data System (ADS)
Balaban, Stephen
2015-05-01
Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state of the art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm welcome from researchers and practitioners alike.
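The frontalization step described above rests on fitting transformations to fiducial (landmark) points. As a hedged illustration of the simplest ingredient, the sketch below fits a 2D affine transform to landmark correspondences by least squares; DeepFace's actual pipeline is more elaborate (a 3D face model with piecewise affine warping), and the coordinates here are invented.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst landmarks."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])   # homogeneous coordinates, n x 3
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ P = dst for P (3 x 2)
    return P

def apply_affine(P, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ P

# toy fiducial points: rotate and translate to simulate a non-frontal pose
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([2.0, -1.0])

P = fit_affine(src, dst)
print(np.allclose(apply_affine(P, src), dst))  # True
```

In a real alignment pipeline the inverse of such a fit warps the detected face toward a canonical pose before it is fed to the network.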
Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean
2017-11-01
People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with an early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the mapping anatomical correlations across the cerebral cortex methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed increased cortical covariance between the right fusiform face region seed and right orbitofrontal cortex in controls than early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Differences in the way older and younger adults rate threat in faces but not situations.
Ruffman, Ted; Sullivan, Susan; Edge, Nigel
2006-07-01
We compared young and healthy older adults' ability to rate photos of faces and situations (e.g., sporting activities) for the degree of threat they posed. Older adults did not distinguish between more and less dangerous faces to the same extent as younger adults did. In contrast, we found no significant age differences in young and older adults' ability to distinguish between high- and low-danger situations. The differences between young and older adults on the face task were independent of age differences in older adults' fluid IQ. We discuss results in relation to differences between young and older adults on emotion-recognition tasks; we also discuss sociocognitive and neuropsychological (e.g., amygdala) theories of aging.
Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth
2016-03-01
Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Empathy costs: Negative emotional bias in high empathisers.
Chikovani, George; Babuadze, Lasha; Iashvili, Nino; Gvalia, Tamar; Surguladze, Simon
2015-09-30
Excessive empathy has been associated with compassion fatigue in health professionals and caregivers. We investigated the effect of empathy on emotion processing in 137 healthy individuals of both sexes. We tested the hypothesis that high empathy may underlie increased sensitivity to negative emotion recognition, which may interact with gender. Facial emotion stimuli comprised happy, angry, fearful, and sad faces presented at different intensities (mild and prototypical) and different durations (500 ms and 2000 ms). The parameters of emotion processing were represented by discrimination accuracy, response bias and reaction time. We found that higher empathy was associated with better recognition of all emotions. We also demonstrated that higher empathy was associated with a response bias towards sad and fearful faces. The reaction time analysis revealed that higher empathy in females was associated with faster (compared with males) recognition of mildly sad faces of brief duration. We conclude that although empathic ability confers an advantage in recognizing all facial emotional expressions, the bias towards emotional negativity may carry a risk of empathic distress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Dennett, Hugh W; McKone, Elinor; Tavashmi, Raka; Hall, Ashleigh; Pidcock, Madeleine; Edwards, Mark; Duchaine, Bradley
2012-06-01
Many research questions require a within-class object recognition task matched for general cognitive requirements with a face recognition task. If the object task also has high internal reliability, it can improve accuracy and power in group analyses (e.g., mean inversion effects for faces vs. objects), individual-difference studies (e.g., correlations between certain perceptual abilities and face/object recognition), and case studies in neuropsychology (e.g., whether a prosopagnosic shows a face-specific or object-general deficit). Here, we present such a task. Our Cambridge Car Memory Test (CCMT) was matched in format to the established Cambridge Face Memory Test, requiring recognition of exemplars across view and lighting change. We tested 153 young adults (93 female). Results showed high reliability (Cronbach's alpha = .84) and a range of scores suitable both for normal-range individual-difference studies and, potentially, for diagnosis of impairment. The mean for males was much higher than the mean for females. We demonstrate independence between face memory and car memory (dissociation based on sex, plus a modest correlation between the two), including where participants have high relative expertise with cars. We also show that expertise with real car makes and models of the era used in the test significantly predicts CCMT performance. Surprisingly, however, regression analyses imply that there is an effect of sex per se on the CCMT that is not attributable to a stereotypical male advantage in car expertise.
Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L
2012-06-01
Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
Recognition of facial and musical emotions in Parkinson's disease.
Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N
2013-03-01
Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from face and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment scales for anxiety and depression. Results showed that the PD group was significantly impaired for recognition of both fear and sadness emotions from facial expressions, whereas their performance in recognition of emotions from musical excerpts was not different from that of the control group. The scores of fear and sadness recognition from faces were neither correlated to scores in tests for executive and cognitive functions, nor to scores in self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.
Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin S; South, Mikle; McPartland, James C; Kaiser, Martha D; Schultz, Robert T
2012-12-01
Although impaired social-emotional ability is a hallmark of autism spectrum disorder (ASD), the perceptual skills and mediating strategies contributing to the social deficits of autism are not well understood. A perceptual skill that is fundamental to effective social communication is the ability to accurately perceive and interpret facial emotions. To evaluate the expression processing of participants with ASD, we designed the Let's Face It! Emotion Skills Battery (LFI! Battery), a computer-based assessment composed of three subscales measuring verbal and perceptual skills implicated in the recognition of facial emotions. We administered the LFI! Battery to groups of participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. On the Name Game labeling task, participants with ASD (N = 68) performed on par with TDC individuals (N = 66) in their ability to name the facial emotions of happy, sad, disgust and surprise and were only impaired in their ability to identify the angry expression. On the Matchmaker Expression task that measures the recognition of facial emotions across different facial identities, the ASD participants (N = 66) performed reliably worse than TDC participants (N = 67) on the emotions of happy, sad, disgust, frighten and angry. In the Parts-Wholes test of perceptual strategies of expression, the TDC participants (N = 67) displayed more holistic encoding for the eyes than the mouths in expressive faces whereas ASD participants (N = 66) exhibited the reverse pattern of holistic recognition for the mouth and analytic recognition of the eyes. In summary, findings from the LFI! Battery show that participants with ASD were able to label the basic facial emotions (with the exception of angry expression) on par with age- and IQ-matched TDC participants. 
However, participants with ASD were impaired in their ability to generalize facial emotions across different identities and showed a tendency to recognize the mouth feature holistically and the eyes as isolated parts. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
Offenders become the victim in virtual reality: impact of changing perspective in domestic violence.
Seinfeld, S; Arroyo-Palacios, J; Iruretagoyena, G; Hortensius, R; Zapata, L E; Borland, D; de Gelder, B; Slater, M; Sanchez-Vives, M V
2018-02-09
The role of empathy and perspective-taking in preventing aggressive behaviors has been highlighted in several theoretical models. In this study, we used immersive virtual reality to induce a full body ownership illusion that allows offenders to be in the body of a victim of domestic abuse. A group of male domestic violence offenders and a control group without a history of violence experienced a virtual scene of abuse in first-person perspective. During the virtual encounter, the participants' real bodies were replaced with a life-sized virtual female body that moved synchronously with their own real movements. Participants' emotion recognition skills were assessed before and after the virtual experience. Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy. For the first time, we demonstrate that changing the perspective of an aggressive population through immersive virtual reality can modify socio-perceptual processes such as emotion recognition, thought to underlie this specific form of aggressive behaviors.
From Facial Emotional Recognition Abilities to Emotional Attribution: A Study in Down Syndrome
ERIC Educational Resources Information Center
Hippolyte, Loyse; Barisnikov, Koviljka; Van der Linden, Martial; Detraux, Jean-Jacques
2009-01-01
Facial expression processing and the attribution of facial emotions to a context were investigated in adults with Down syndrome (DS) in two experiments. Their performances were compared with those of a child control group matched for receptive vocabulary. The ability to process faces without emotional content was controlled for, and no differences…
Li, Yuan Hang; Tottenham, Nim
2013-04-01
A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Leist, Tatyana; Dadds, Mark R
2009-04-01
Emotional processing styles appear to characterize various forms of psychopathology and environmental adversity in children. For example, autistic, anxious, high- and low-emotion conduct problem children, and children who have been maltreated, all appear to show specific deficits and strengths in recognizing the facial expressions of emotions. Until now, the relationships between emotion recognition, antisocial behaviour, emotional problems, callous-unemotional (CU) traits and early maltreatment have never been assessed simultaneously in one study, and the specific associations of emotion recognition to maltreatment and child characteristics are therefore unknown. We examined facial-emotion processing in a sample of 23 adolescents selected for high-risk status on the variables of interest. As expected, maltreatment and child characteristics showed unique associations. CU traits were uniquely related to impairments in fear recognition. Antisocial behaviour was uniquely associated with better fear recognition, but impaired anger recognition. Emotional problems were associated with better recognition of anger and sadness, but lower recognition of neutral faces. Maltreatment was predictive of superior recognition of fear and sadness. The findings are considered in terms of social information-processing theories of psychopathology. Implications for clinical interventions are discussed.
Learning to recognize face shapes through serial exploration.
Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H
2013-05-01
Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.
Item Response Theory Analyses of the Cambridge Face Memory Test (CFMT)
Cho, Sun-Joo; Wilmer, Jeremy; Herzmann, Grit; McGugin, Rankin; Fiset, Daniel; Van Gulick, Ana E.; Ryan, Katie; Gauthier, Isabel
2014-01-01
We evaluated the psychometric properties of the Cambridge face memory test (CFMT; Duchaine & Nakayama, 2006). First, we assessed the dimensionality of the test with a bi-factor exploratory factor analysis (EFA). This analysis revealed a general factor and three specific factors clustered by targets of the CFMT. However, the three specific factors appeared to be minor factors that can be ignored. Second, we fit a unidimensional item response model. This item response model showed that the CFMT items could discriminate individuals at different ability levels and covered a wide range of the ability continuum. We found the CFMT to be particularly precise for a wide range of ability levels. Third, we implemented item response theory (IRT) differential item functioning (DIF) analyses for each gender group and two age groups (Age ≤ 20 versus Age > 21). This DIF analysis suggested little evidence of consequential differential functioning on the CFMT for these groups, supporting the use of the test to compare older to younger, or male to female, individuals. Fourth, we tested for a gender difference on the latent facial recognition ability with an explanatory item response model. We found a small but significant gender difference in latent face recognition ability, with women scoring 0.184 higher than men at the mean age of 23.2 years, controlling for linear and quadratic age effects. Finally, we discuss the practical considerations of the use of total scores versus IRT scale scores in applications of the CFMT. PMID:25642930
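For readers unfamiliar with the unidimensional item response models used in the CFMT analysis above, the sketch below shows a minimal two-parameter logistic (2PL) item response function and a crude grid-search maximum-likelihood ability estimate. The item parameters are invented for illustration and are not those of the CFMT.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta),
    with item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_ability(responses, items, grid=None):
    """Crude grid-search MLE of ability, given 0/1 responses and (a, b) per item."""
    if grid is None:
        grid = [g / 100.0 for g in range(-400, 401)]  # theta in [-4, 4]
    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# hypothetical item bank: (discrimination, difficulty) pairs
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(mle_ability([1, 1, 1, 0], items))
```

High-discrimination items (large `a`) sharpen the likelihood near their difficulty `b`, which is why a test like the CFMT can be precise across a wide ability range when its items span many difficulty levels.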
Reduced set averaging of face identity in children and adolescents with autism.
Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina
2015-01-01
Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne
2017-05-01
Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.
Children with Autism Spectrum Disorder scan own-race faces differently from other-race faces.
Yi, Li; Quinn, Paul C; Fan, Yuebo; Huang, Dan; Feng, Cong; Joseph, Lisa; Li, Jiao; Lee, Kang
2016-01-01
It has been well documented that people recognize and scan other-race faces differently from faces of their own race. The current study examined whether this cross-racial difference in face processing found in the typical population also exists in individuals with Autism Spectrum Disorder (ASD). Participants included 5- to 10-year-old children with ASD (n=29), typically developing (TD) children matched on chronological age (n=29), and TD children matched on nonverbal IQ (n=29). Children completed a face recognition task in which they were asked to memorize and recognize both own- and other-race faces while their eye movements were tracked. We found no recognition advantage for own-race faces relative to other-race faces in any of the three groups. However, eye-tracking results indicated that, similar to TD children, children with ASD exhibited a cross-racial face-scanning pattern: they looked at the eyes of other-race faces longer than at those of own-race faces, whereas they looked at the mouth of own-race faces longer than at that of other-race faces. The findings suggest that although children with ASD have difficulty with processing some aspects of faces, their ability to process face race information is relatively spared. Copyright © 2015 Elsevier Inc. All rights reserved.
The association between PTSD and facial affect recognition.
Williams, Christian L; Milanak, Melissa E; Judah, Matt R; Berenbaum, Howard
2018-05-05
The major aims of this study were to examine whether, and if so how, higher levels of PTSD would be associated with performance on a facial affect recognition task in which facial expressions of emotion are superimposed on emotionally valenced, non-face images. College students with trauma histories (N = 90) completed a facial affect recognition task as well as measures of exposure to traumatic events and PTSD symptoms. When the face and context matched, participants with higher levels of PTSD were significantly more accurate. When the face and context were mismatched, participants with lower levels of PTSD were more accurate than were those with higher levels of PTSD. These findings suggest that PTSD is associated with how people process affective information. Furthermore, these results suggest that the enhanced attention of people with higher levels of PTSD to affective information can be either beneficial or detrimental to their ability to accurately identify facial expressions of emotion. Limitations, future directions and clinical implications are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Chang, Allen; Murray, Elizabeth; Yassa, Michael A.
2016-01-01
Face recognition is an important component of successful social interactions in humans. A large literature in social psychology has focused on the phenomenon termed the "other-race effect" (ORE), the tendency to be more proficient with face recognition within one's own ethnic group, as compared to other ethnic groups. Several potential hypotheses have been proposed for this effect, including perceptual expertise, social grouping, and holistic face processing. Recent work on mnemonic discrimination (i.e., the ability to resolve mnemonic interference among similar experiences) may provide a mechanistic account for the ORE. In the current study, we examined how discrimination and generalization in the presence of mnemonic interference may contribute to the ORE. We developed a database of computerized faces divided evenly among ethnic origins (Black, Caucasian, East Asian, South Asian), as well as morphed face stimuli that varied in the amount of similarity to the original stimuli (30%, 40%, 50%, and 60% morphs). Participants first examined the original unmorphed stimuli during study, then during test were asked to judge the prior occurrence of repetitions (targets), morphed stimuli (lures), and new stimuli (foils). We examined participants' ability to correctly reject similar morphed lures and found that it increased linearly as a function of face dissimilarity. We additionally found that Caucasian participants' mnemonic discrimination/generalization functions were sharply tuned for Caucasian faces but considerably less tuned for East Asian and Black faces. These results suggest that expertise plays an important role in resolving mnemonic interference, which may offer a mechanistic account for the ORE. PMID:26413724
Ruocco, Anthony C.; Reilly, James L.; Rubin, Leah H.; Daros, Alex R.; Gershon, Elliot S.; Tamminga, Carol A.; Pearlson, Godfrey D.; Hill, S. Kristian; Keshavan, Matcheri S.; Gur, Ruben C.; Sweeney, John A.
2014-01-01
Background Difficulty recognizing facial emotions is an important social-cognitive deficit associated with psychotic disorders. It also may reflect a familial risk for psychosis in schizophrenia-spectrum disorders and bipolar disorder. Objective The objectives of this study from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium were to: 1) compare emotion recognition deficits in schizophrenia, schizoaffective disorder and bipolar disorder with psychosis, 2) determine the familiality of emotion recognition deficits across these disorders, and 3) evaluate emotion recognition deficits in nonpsychotic relatives with and without elevated Cluster A and Cluster B personality disorder traits. Method Participants included probands with schizophrenia (n=297), schizoaffective disorder (depressed type, n=61; bipolar type, n=69), bipolar disorder with psychosis (n=248), their first-degree relatives (n=332, n=69, n=154, and n=286, respectively) and healthy controls (n=380). All participants completed the Penn Emotion Recognition Test, a standardized measure of facial emotion recognition assessing four basic emotions (happiness, sadness, anger and fear) and neutral expressions (no emotion). Results Compared to controls, emotion recognition deficits among probands increased progressively from bipolar disorder to schizoaffective disorder to schizophrenia. Proband and relative groups showed similar deficits perceiving angry and neutral faces, whereas deficits on fearful, happy and sad faces were primarily isolated to schizophrenia probands. Even non-psychotic relatives without elevated Cluster A or Cluster B personality disorder traits showed deficits on neutral and angry faces. Emotion recognition ability was moderately familial only in schizophrenia families. Conclusions Emotion recognition deficits are prominent but somewhat different across psychotic disorders. 
These deficits are reflected to a lesser extent in relatives, particularly on angry and neutral faces. Deficits were evident in non-psychotic relatives even without elevated personality disorder traits. Deficits in facial emotion recognition may reflect an important social-cognitive deficit in patients with psychotic disorders. PMID:25052782
Delayed Face Recognition in Children and Adolescents with Autism Spectrum Disorders
Tehrani-Doost, Mehdi; Ghanbari-Motlagh, Maria; Shahrivar, Zahra
2012-01-01
Objective Children with autism spectrum disorders (ASDs) have great problems in social interactions, including face recognition. There are many studies reporting deficits in face memory in individuals with ASDs. On the other hand, some studies indicate that this kind of memory is intact in this group. In the present study, delayed face recognition was investigated in children and adolescents with ASDs compared to an age- and sex-matched typically developing group. Methods In two sessions, the Benton Facial Recognition Test was administered to 15 children and adolescents with ASDs (high functioning autism and Asperger syndrome) and to 15 normal participants, ages 8-17 years. In the first session, the long form of the Benton Facial Recognition Test was administered without any delay. In the second session, one week later, the test was administered with a 15-second delay. The reaction times and correct responses were measured in both conditions as the dependent variables. Results Comparison of the reaction times and correct responses in the two groups revealed no significant difference in the delayed and non-delayed conditions. Furthermore, no significant difference was observed between the two conditions in the ASD patients when comparing these variables. Although the ASD group showed a significant correlation (p<0.05) between the delayed and non-delayed conditions, this correlation was not significant in the normal group. Moreover, data analysis revealed no significant difference between the two groups in the two conditions when IQ was considered as a covariate. Conclusion In this study, it was found that the ability to recognize faces in simultaneous and delayed conditions is similar between adolescents with ASDs and their normal counterparts. PMID:22952545
Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging
Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice
2012-01-01
Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800
Fraudulent ID using face morphs: Experiments on human and automatic recognition.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2017-01-01
Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people's ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to 'trained' human viewers, i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security. PMID:28328928
Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.
Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M
2014-11-01
Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but does show visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of the two disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
A model of attention-guided visual perception and recognition.
Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A
1998-08-01
A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.
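The "behavioral recognition program" described above can be pictured as a stored sequence of (attention-window movement, predicted fragment) pairs that is replayed at recognition time. The toy Python sketch below is only an illustration of that idea: the class and function names, the set-overlap matching rule, and the acceptance threshold are our assumptions, not the authors' implementation.

```python
# Toy sketch of a "behavioral recognition program": motor memory stores the
# scanpath ("where"), sensory memory stores predicted edge fragments ("what").
from dataclasses import dataclass

@dataclass
class Fixation:
    saccade: tuple           # programmed attention-window movement (motor memory)
    predicted_edges: frozenset  # predicted local edge features (sensory memory)

def learn_program(image_features):
    """Store one (movement, fragment) pair per fixation during primary viewing."""
    return [Fixation(saccade=pos, predicted_edges=frozenset(feats))
            for pos, feats in image_features]

def recognize(program, image, threshold=0.8):
    """Replay the stored scanpath; accept if predicted fragments match observed ones.

    `image` maps fixation positions to the edge features found there.
    """
    matches = 0
    for fix in program:
        observed = image.get(fix.saccade, frozenset())
        if fix.predicted_edges and \
           len(fix.predicted_edges & observed) / len(fix.predicted_edges) >= threshold:
            matches += 1
    return matches / len(program) >= threshold
```

In the actual model, invariance to shift, rotation and scale would come from coding each movement relative to the previous fixation and to local edge orientations; this sketch omits that relational coding for brevity.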
Sullivan, Susan; Campbell, Anna; Hutton, Sam B; Ruffman, Ted
2017-05-01
Research indicates that older adults' (≥60 years) emotion recognition is worse than that of young adults, that young and older men's emotion recognition is worse than that of young and older women (respectively), and that older adults look at mouths relative to eyes more than young adults do. Nevertheless, previous research has not compared older men's and women's looking at emotion faces, so the present study had two aims: (a) to examine whether the tendency to look at mouths is stronger amongst older men compared with older women and (b) to examine whether men's mouth looking correlates with better emotion recognition. We examined the emotion recognition abilities and spontaneous gaze patterns of young (n = 60) and older (n = 58) males and females as they labelled emotion faces. Older men spontaneously looked more at mouths than older women did, and older men's looking at mouths correlated with their emotion recognition, whereas women's looking at eyes correlated with their emotion recognition. The findings are discussed in relation to a growing body of research suggesting both age and gender differences in response to emotional stimuli, and the differential efficacy of mouth and eye looking for men and women. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja
2016-09-01
Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation on attentional bias toward emotional cues and on emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expression in four graded categories of emotional intensity (20, 30, 40, and 80%). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and a shift in selective attention toward sad faces after fludrocortisone compared to placebo. We found no attentional bias toward happy faces after either fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation thus seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings of MR function. However, we did not find an effect of MR stimulation on emotion recognition.
Maat, Arija; van Haren, Neeltje E M; Bartholomeusz, Cali F; Kahn, René S; Cahn, Wiepke
2016-02-01
Investigations of social cognition in schizophrenia have demonstrated consistent impairments compared to healthy controls. Functional imaging studies in schizophrenia patients and healthy controls have revealed that social cognitive processing depends critically on the amygdala and the prefrontal cortex (PFC). However, the relationship between social cognition and structural brain abnormalities in these regions in schizophrenia patients is less well understood. Measures of facial emotion recognition and theory of mind (ToM), two key social cognitive abilities, as well as face perception and IQ, were assessed in 166 patients with schizophrenia and 134 healthy controls. MRI brain scans were acquired. Automated parcellation of the brain to determine gray matter volume of the amygdala and the superior, middle, inferior and orbital PFC was performed. Between-group analyses showed poorer recognition of angry faces, poorer ToM performance, and decreased amygdala and PFC gray matter volumes in schizophrenia patients as compared to healthy controls. Moreover, in schizophrenia patients, recognition of angry faces was associated with inferior PFC gray matter volume, particularly the pars triangularis (p=0.006), with poor performance being related to reduced pars triangularis gray matter volume. In addition, ToM ability was related to PFC gray matter volume, particularly middle PFC (p=0.001), in that poor ToM skills in schizophrenia patients were associated with reduced middle PFC gray matter volume. In conclusion, reduced PFC, but not amygdala, gray matter volume is associated with social cognitive deficits in schizophrenia. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
Split-brain reveals separate but equal self-recognition in the two cerebral hemispheres.
Uddin, Lucina Q; Rayman, Jan; Zaidel, Eran
2005-09-01
To assess the ability of the disconnected cerebral hemispheres to recognize images of the self, a split-brain patient (an individual who underwent complete cerebral commissurotomy to relieve intractable epilepsy) was tested using morphed self-face images presented to one visual hemifield (projecting to one hemisphere) at a time while making "self/other" judgments. The performance of the right and left hemispheres of this patient as assessed by a signal detection method was not significantly different, though a measure of bias did reveal hemispheric differences. The right and left hemispheres of this patient independently and equally possessed the ability to self-recognize, but only the right hemisphere could successfully recognize familiar others. This supports a modular concept of self-recognition and other-recognition, separately present in each cerebral hemisphere.
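For readers unfamiliar with the analysis, a signal-detection treatment of "self/other" judgments separates sensitivity (d') from response bias (criterion c), computed from hit and false-alarm rates. The minimal sketch below uses the standard textbook formulas; it is illustrative only, since the abstract does not specify the authors' exact computation.

```python
# Standard equal-variance signal-detection measures from hit and false-alarm
# rates. Rates of exactly 0 or 1 are typically corrected (e.g. 1/(2N) rule)
# before applying the inverse normal CDF; that correction is omitted here.
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    z = NormalDist().inv_cdf            # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)  # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion
```

In terms of the result above, equal d' across the two hemispheres with differing c would correspond to equivalent self-recognition ability accompanied by different response biases.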
Sex differences in facial emotion recognition across varying expression intensity levels from videos.
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
2018-01-01
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations. PMID:29293674
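The unbiased hit rate (Hu; Wagner, 1993) used as the accuracy measure above corrects raw hit rates for response biases: for each category it multiplies the hit rate by the precision of responses in that category. A minimal illustrative sketch (the function name and confusion-matrix interface are our assumptions, not the study's analysis code):

```python
# Wagner's (1993) unbiased hit rate: squared count of correct responses for a
# category divided by the product of its stimulus (row) and response (column)
# totals, i.e. hit rate x precision. Returns one Hu value per category.
def unbiased_hit_rates(confusion):
    """confusion[i][j] = count of stimuli of category i labelled as category j."""
    n = len(confusion)
    row = [sum(confusion[i]) for i in range(n)]
    col = [sum(confusion[i][j] for i in range(n)) for j in range(n)]
    return [confusion[i][i] ** 2 / (row[i] * col[i]) if row[i] and col[i] else 0.0
            for i in range(n)]
```

Unlike the raw hit rate, Hu penalises a participant who indiscriminately answers with one label, since that label's column total inflates the denominator.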
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Zhou, Guifei; Liu, Jiangang; Ding, Xiao Pan; Fu, Genyue; Lee, Kang
2016-01-01
Numerous developmental studies have suggested that other-race effect (ORE) in face recognition emerges as early as in infancy and develops steadily throughout childhood. However, there is very limited research on the neural mechanisms underlying this developmental ORE. The present study used Granger causality analysis (GCA) to examine the development of children's cortical networks in processing own- and other-race faces. Children were between 3 and 13 years. An old-new paradigm was used to assess their own- and other-race face recognition with ETG-4000 (Hitachi Medical Co., Japan) acquiring functional near infrared spectroscopy (fNIRS) data. After preprocessing, for each participant and under each face condition, we obtained the causal map by calculating the weights of causal relations between the time courses of [oxy-Hb] of each pair of channels using GCA. To investigate further the differential causal connectivity for own-race faces and other-race faces at the group level, a repeated measure analysis of variance (ANOVA) was performed on the GCA weights for each pair of channels with the face race task (own-race face vs. other-race face) as the within-subject variable and the age as a between-subject factor (continuous variable). We found an age-related increase in functional connectivity, paralleling a similar age-related improvement in behavioral face processing ability. More importantly, we found that the significant differences in neural functional connectivity between the recognition of own-race faces and that of other-race faces were modulated by age. Thus, like the behavioral ORE, the neural ORE emerges early and undergoes a protracted developmental course. PMID:27713696
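Granger causality analysis of the kind described asks whether one channel's past improves the prediction of another channel's time course beyond that channel's own past. The simplified numpy sketch below computes a pairwise GCA "weight" as a log residual-variance ratio; the lag order, the variance-ratio measure, and all names are assumptions for illustration, not the study's fNIRS pipeline.

```python
# Pairwise Granger-causality weight via ordinary least squares: compare the
# residual variance of an autoregressive model of y built from y's own lags
# against one that also includes x's lags.
import numpy as np

def granger_weight(x, y, lags=2):
    """log variance ratio; > 0 suggests x helps predict y beyond y's own past."""
    T = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    full = np.column_stack([own] +
                           [x[lags - k:T - k, None] for k in range(1, lags + 1)])
    def resid_var(X):
        X1 = np.column_stack([np.ones(len(Y)), X])   # add intercept
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return np.var(Y - X1 @ beta)
    return float(np.log(resid_var(own) / resid_var(full)))
```

Applied to each ordered pair of [oxy-Hb] channel time courses, such weights form a directed causal map; in practice a statistical test (e.g. an F-test on the variance ratio) would decide which connections are reliable.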
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
2016-01-01
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown. 
We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766
ERIC Educational Resources Information Center
Farran, Emily K.; Branson, Amanda; King, Ben J.
2011-01-01
Facial expression recognition was investigated in 20 males with high functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, "face in the crowd" paradigm with a…
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which are not representative of the environments faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (a set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
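As a rough illustration of the sparse-representation half of this idea (not the authors' joint extreme-sparse-learning formulation), the sketch below codes a test sample over a dictionary of labeled training atoms with orthogonal matching pursuit and assigns the class whose atoms reconstruct it with the smallest residual, the classic SRC scheme. All names and parameters are illustrative.

```python
import numpy as np

def omp(D, s, k):
    """Orthogonal matching pursuit: approximate s as a k-sparse
    combination of the (unit-norm) columns of dictionary D."""
    residual, support = s.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, s, k=5):
    """Sparse-representation classification: code s over the whole
    dictionary, then pick the class whose atoms best reconstruct it."""
    x = omp(D, s, k)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        err = np.linalg.norm(s - D @ xc)
        if err < best_err:
            best, best_err = c, err
    return best
```

The paper's contribution is to learn the dictionary and a nonlinear (extreme learning machine) classifier jointly, rather than coding over raw training atoms as done here.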
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the different sources of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Crookes, Kate; Robbins, Rachel A
2014-10-01
Performance on laboratory face tasks improves across childhood, not reaching adult levels until adolescence. Debate surrounds the source of this development, with recent reviews suggesting that underlying face processing mechanisms are mature early in childhood and that the improvement seen on experimental tasks instead results from general cognitive/perceptual development. One face processing mechanism that has been argued to develop slowly is the ability to encode faces in a view-invariant manner (i.e., allowing recognition across changes in viewpoint). However, many previous studies have not controlled for general cognitive factors. In the current study, 8-year-olds and adults performed a recognition memory task with two study-test viewpoint conditions: same view (study front view, test front view) and change view (study front view, test three-quarter view). To allow quantitative comparison between children and adults, performance in the same view condition was matched across the groups by increasing the learning set size for adults. Results showed poorer memory in the change view condition than in the same view condition for both adults and children. Importantly, there was no quantitative difference between children and adults in the size of decrement in memory performance resulting from a change in viewpoint. This finding adds to growing evidence that face processing mechanisms are mature early in childhood. Copyright © 2014 Elsevier Inc. All rights reserved.
State-dependent alteration in face emotion recognition in depression.
Anderson, Ian M; Shippen, Clare; Juhasz, Gabriella; Chase, Diana; Thomas, Emma; Downey, Darragh; Toth, Zoltan G; Lloyd-Williams, Kathryn; Elliott, Rebecca; Deakin, J F William
2011-04-01
Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, where such biases may contribute to vulnerability to relapse. We aimed to compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression. The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants completed a computerised face emotion recognition task following standardised assessment of diagnosis and mood symptoms. In the control group, women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness, but there was no significant emotion × group interaction; a similar pattern tended to be seen for happiness although not for surprise or disgust. These differences were confined to participants who were antidepressant-free, with those taking antidepressants having similar results to the control group. Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.
Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus
2012-02-01
The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged 10 to 17 years. Adolescents' recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors on angry faces than controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (95% CI 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls made at least one error on neutral faces, OR=3.46 (95% CI 1.02 to 11.7; p=0.047). No differences were found for the other emotions or in the distribution of errors for each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and could explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hot, Pascal; Klein-Koerkamp, Yanica; Borg, Céline; Richard-Mornas, Aurélie; Zsoldos, Isabella; Paignon Adeline, Adeline; Thomas Antérion, Catherine; Baciu, Monica
2013-06-01
A decline in the ability to identify fearful expressions has been frequently reported in patients with Alzheimer's disease (AD). In patients with severe destruction of the bilateral amygdala, similar difficulties have been reduced by using an explicit visual exploration strategy focusing on gaze. The current study assessed whether a similar strategy, inducing AD patients to process the eyes region first, could improve their fear recognition. Seventeen patients with mild AD and 34 healthy subjects (17 young adults and 17 older adults) performed a classical task of emotional identification of faces expressing happiness, anger, and fear in two conditions: the face appeared progressively from the eyes region to the periphery (eyes region condition) or it appeared as a whole (global condition). A specific impairment in identifying fearful expressions was shown in AD patients compared with older adult controls in the global condition. Fear expression recognition was significantly improved in AD patients in the eyes region condition, in which they performed similarly to older adult controls. Our results suggest that using a different strategy of face exploration, starting with processing of the eyes region, may compensate for the fear recognition deficit in AD patients, and that part of this deficit could be related to visuo-perceptual impairments. Additionally, these findings suggest that the decline in fearful face recognition reported in both normal aging and AD may result from impairment of non-amygdalar processing in both groups and impairment of amygdala-dependent processing in AD. Copyright © 2013 Elsevier Inc. All rights reserved.
A new selective developmental deficit: Impaired object recognition with normal face recognition.
Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley
2011-05-01
Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. 
These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.
The importance of internal facial features in learning new faces.
Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W
2015-01-01
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
Random-Profiles-Based 3D Face Recognition System
Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee
2014-01-01
In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends highly on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101
Jemel, Boutheina; Pisani, Michèle; Calabria, Marco; Crommelinck, Marc; Bruyer, Raymond
2003-07-01
Impoverished images of faces, such as two-tone Mooney faces, severely impair the ability to recognize whose face is shown. However, previously seeing the corresponding face in a clear format helps fame judgments for Mooney faces. In the present experiment, we sought to demonstrate that this enhancement in the perceptual encoding of Mooney faces results from top-down effects due to prior activation of familiar face representations. Event-related potentials (ERPs) were obtained for target Mooney images of familiar and unfamiliar faces preceded by clear pictures showing either the same photo (same-photo prime), a different photo of the same person (different-photo prime), or a new unfamiliar face (no prime). In agreement with previous findings, the primes were effective in enhancing the recognition of familiar faces in Mooney images; this priming effect was larger in the same-photo than in the different-photo priming condition. ERP data revealed that the amplitude of the N170 face-sensitive component was smaller for familiar than for unfamiliar face targets, and smaller for familiar face targets primed by the same photo than by a different photo (a graded priming effect). Because the priming effect was restricted to familiar faces and occurred at the peak of the N170, we suggest that this early perceptual stage of face processing is penetrable by top-down effects arising from the activation of face representations within the face recognition system.
Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia
2016-05-01
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding of how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing and investigated how the functional integration of the FN is related to performance in facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and the right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS was positively correlated with facial expression recognition ability, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS in facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
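The within-network connectivity measure described, each face-network voxel's average resting-state correlation with every other voxel in the network, reduces to a short computation. This is a simplified stand-in for the voxel-based global brain connectivity method, with fMRI preprocessing assumed already done.

```python
import numpy as np

def within_network_connectivity(ts):
    """ts: (timepoints, voxels) array of time courses for voxels in the
    face network. Returns each voxel's mean correlation with all others."""
    r = np.corrcoef(ts, rowvar=False)   # voxel-by-voxel correlation matrix
    np.fill_diagonal(r, np.nan)         # exclude self-correlation
    return np.nanmean(r, axis=1)        # average over the other voxels
```

Each participant's WNC map would then be related across subjects to "Reading the Mind in the Eyes" scores, as in the abstract.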
Detecting individual memories through the neural decoding of memory states and past experience.
Rissman, Jesse; Greely, Henry T; Wagner, Anthony D
2010-05-25
A wealth of neuroscientific evidence indicates that our brains respond differently to previously encountered than to novel stimuli. There has been an upswell of interest in the prospect that functional MRI (fMRI), when coupled with multivariate data analysis techniques, might allow the presence or absence of individual memories to be detected from brain activity patterns. This could have profound implications for forensic investigations and legal proceedings, and thus the merits and limitations of such an approach are in critical need of empirical evaluation. We conducted two experiments to investigate whether neural signatures of recognition memory can be reliably decoded from fMRI data. In Exp. 1, participants were scanned while making explicit recognition judgments for studied and novel faces. Multivoxel pattern analysis (MVPA) revealed a robust ability to classify whether a given face was subjectively experienced as old or new, as well as whether recognition was accompanied by recollection, strong familiarity, or weak familiarity. Moreover, a participant's subjective mnemonic experiences could be reliably decoded even when the classifier was trained on the brain data from other individuals. In contrast, the ability to classify a face's objective old/new status, when holding subjective status constant, was severely limited. This important boundary condition was further evidenced in Exp. 2, which demonstrated that mnemonic decoding is poor when memory is indirectly (implicitly) probed. Thus, although subjective memory states can be decoded quite accurately under controlled experimental conditions, fMRI has uncertain utility for objectively detecting an individual's past experiences.
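The across-participant decoding result can be caricatured with a standard linear classifier trained on voxel patterns from all-but-one subject and tested on the held-out subject. The logistic-regression classifier and the synthetic pattern format below are assumptions for illustration, not the authors' exact MVPA pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def across_subject_decode(patterns, labels, subjects):
    """Leave-one-subject-out decoding of subjective old/new memory states.
    patterns: (trials, voxels); labels: 0=new, 1=old; subjects: id per trial."""
    accs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        clf = LogisticRegression(max_iter=1000).fit(patterns[train], labels[train])
        accs.append(clf.score(patterns[test], labels[test]))
    return float(np.mean(accs))
```

Above-chance accuracy here would correspond to the finding that subjective mnemonic states can be decoded even from other individuals' brain data.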
Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo
2011-01-01
In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia, who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even when it was embedded in an emotionally congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted from Convolutional Neural Networks (CNNs) on face and ear images to obtain more powerful discriminative features and a more robust representation. First, deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using either traditional concatenation or a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
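The fusion and matching stages can be sketched as below, with precomputed feature vectors standing in for the VGG-M activations. Only concatenation fusion and a linear multiclass SVM are shown; the DCA projection reported as superior in the paper is omitted for brevity, and all shapes are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_and_train(face_feats, ear_feats, labels):
    """Serial (concatenation) fusion of the two modalities, followed by a
    multiclass linear SVM for identity classification.
    face_feats, ear_feats: (samples, dims) deep feature arrays."""
    fused = np.hstack([face_feats, ear_feats])
    return SVC(kernel="linear").fit(fused, labels)
```

At test time a probe's face and ear features are concatenated the same way and passed to `predict`; DCA would instead project the two feature sets to maximally correlated, class-discriminative subspaces before fusion.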
A Contemporary Approach to Entrepreneurship Education
ERIC Educational Resources Information Center
Jones, Colin; English, Jack
2004-01-01
Entrepreneurial education is the process of providing individuals with the ability to recognise commercial opportunities and the insight, self-esteem, knowledge and skills to act on them. It includes instruction in opportunity recognition, commercialising a concept, marshalling resources in the face of risk, and initiating a business venture. It…
Individual differences in false memory from misinformation: cognitive factors.
Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Lin, Chongde; He, Qinghua; Chen, Chunhui; Li, He; Xue, Gui; Lu, Zhonglin; Dong, Qi
2010-07-01
This research investigated the cognitive correlates of false memories that are induced by the misinformation paradigm. A large sample of Chinese college students (N=436) participated in a misinformation procedure and also took a battery of cognitive tests. Results revealed sizable and systematic individual differences in false memory arising from exposure to misinformation. False memories were significantly and negatively correlated with measures of intelligence (measured with Raven's Advanced Progressive Matrices and Wechsler Adult Intelligence Scale), perception (Motor-Free Visual Perception Test, Change Blindness, and Tone Discrimination), memory (Wechsler Memory Scales and 2-back Working Memory tasks), and face judgement (Face Recognition and Facial Expression Recognition). These findings suggest that people with relatively low intelligence and poor perceptual abilities might be more susceptible to the misinformation effect.
Li, Pengli; Zhang, Chunhua; Yi, Li
2016-07-01
The current study examined how children with Autism Spectrum Disorders (ASD) selectively trust others based on three facial cues: face race, attractiveness, and trustworthiness. In a computer-based hide-and-seek game, two face images that differed significantly in one of the three facial cues were presented as cues for selective trust. Children had to selectively trust the own-race, attractive, or trustworthy faces to get the prize. Our findings demonstrate that selective trust based on facial appearance is intact in children with ASD relative to typically developing children: they could selectively trust the informant based on face race and attractiveness. Our results imply that despite their face recognition deficits, children with ASD are still sensitive to some aspects of facial appearance.
Fiacconi, Chris M.; Barkley, Victoria; Finger, Elizabeth C.; Carson, Nicole; Duke, Devin; Rosenbaum, R. Shayna; Gilboa, Asaf; Köhler, Stefan
2014-01-01
Patients with Capgras syndrome (CS) adopt the delusional belief that persons well known to them have been replaced by an imposter. Several current theoretical models of CS attribute such misidentification problems to deficits in covert recognition processes related to the generation of appropriate affective autonomic signals. These models assume intact overt recognition processes for the imposter and, more broadly, for other individuals. As such, it has been suggested that CS could reflect the “mirror-image” of prosopagnosia. The purpose of the current study was to determine whether overt person recognition abilities are indeed always spared in CS. Furthermore, we examined whether CS might be associated with any impairments in overt affective judgments of facial expressions. We pursued these goals by studying a patient with Dementia with Lewy bodies (DLB) who showed clear signs of CS, and by comparing him to another patient with DLB who did not experience CS, as well as to a group of healthy control participants. Clinical magnetic resonance imaging scans revealed medial prefrontal cortex (mPFC) atrophy that appeared to be uniquely associated with the presence of CS. We assessed overt person recognition with three fame recognition tasks, using faces, voices, and names as cues. We also included measures of confidence and probed pertinent semantic knowledge. In addition, participants rated the intensity of fearful facial expressions. We found that CS was associated with overt person recognition deficits when probed with faces and voices, but not with names. Critically, these deficits were not present in the DLB patient without CS. In addition, CS was associated with impairments in overt judgments of affect intensity. Taken together, our findings cast doubt on the traditional view that CS is the mirror-image of prosopagnosia and that it spares overt recognition abilities.
These findings can still be accommodated by models of CS that emphasize deficits in autonomic responding, to the extent that the potential role of interoceptive awareness in overt judgments is taken into account. PMID:25309399
A framework for the recognition of 3D faces and expressions
NASA Astrophysics Data System (ADS)
Li, Chao; Barreto, Armando
2006-04-01
Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation of the subject. However, 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.
Problems of Face Recognition in Patients with Behavioral Variant Frontotemporal Dementia.
Chandra, Sadanandavalli Retnaswami; Patwardhan, Ketaki; Pai, Anupama Ramakanth
2017-01-01
Faces are very special stimuli, as they are essential for social cognition in humans, and it is partly understood that face processing in its abstractness involves several extrastriate areas. One of the most important causes of caregiver suffering in patients with anterior dementia is lack of empathy, which, apart from being a behavioral disorder, could also be due to a failure to categorize the emotions of the people around them. Inclusion criteria: DSM-IV criteria for behavioral variant frontotemporal dementia (bvFTD); patients were tested for prosopagnosia (familiar faces, famous faces, smiling faces, crying faces, and reflected faces) using a simple picture card (figure 1). Exclusion criteria: advanced illness and mixed causes. Of 46 patients (15 females, 31 males; mean age 51.5), 24 had defective face recognition: 10/15 females (70%) and 14/31 males (47%). Familiar face recognition defects were found in 6/10 females and 6/14 males, i.e., 40% (6/15) of females and 19.4% (6/31) of males with FTD. Famous face recognition defects were found in 9/10 females and 7/14 males, i.e., 60% (9/15) of females with FTD versus 22.6% (7/31) of males. Smiling face recognition defects were found in 8/10 females and no males, i.e., 53.3% (8/15) of females. Crying face recognition defects were found in 3/10 females and 2/14 males, i.e., 20% (3/15) of females and 6.5% (2/31) of males. Reflected face recognition defects were found in 4 females. Famous face and positive emotion recognition were defective in 80%, with only 20% comprehending positive emotions; face recognition defects were found in only 45% of males and were more common in females. Face recognition is more affected in females with FTD, and the differential involvement of different aspects of face recognition could be one important factor underlying the decline in the emotional and social behavior of these patients. Understanding these pathological processes will give more insight into patient behavior.
Chang, Liangtang; Zhang, Shikun; Poo, Mu-ming; Gong, Neng
2017-01-01
Mirror self-recognition (MSR) is generally considered to be an intrinsic cognitive ability found only in humans and a few species of great apes. Rhesus monkeys do not spontaneously show MSR, but they have the ability to use a mirror as an instrument to find hidden objects. The mechanism underlying the transition from simple mirror use to MSR remains unclear. Here we show that rhesus monkeys could show MSR after learning precise visual-proprioceptive association for mirror images. We trained head-fixed monkeys on a chair in front of a mirror to touch with spatiotemporal precision a laser pointer light spot on an adjacent board that could only be seen in the mirror. After several weeks of training, when the same laser pointer light was projected to the monkey's face, a location not used in training, all three trained monkeys successfully touched the face area marked by the light spot in front of a mirror. All trained monkeys passed the standard face mark test for MSR both on the monkey chair and in their home cage. Importantly, distinct from untrained control monkeys, the trained monkeys showed typical mirror-induced self-directed behaviors in their home cage, such as using the mirror to explore normally unseen body parts. Thus, bodily self-consciousness may be a cognitive ability present in many more species than previously thought, and acquisition of precise visual-proprioceptive association for the images in the mirror is critical for revealing the MSR ability of the animal. PMID:28193875
Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni
2016-06-01
Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.
Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.
Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M
2014-09-01
Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.
Ho, Michael R; Pezdek, Kathy
2016-06-01
The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.
Visualisation of Lines of Best Fit
ERIC Educational Resources Information Center
Rudziewicz, Michael; Bossé, Michael J.; Marland, Eric S.; Rhoads, Gregory S.
2017-01-01
Humans possess a remarkable ability to recognise both simple patterns such as shapes and handwriting and very complex patterns such as faces and landscapes. To investigate one small aspect of human pattern recognition, in this study participants position lines of "best fit" to two-dimensional scatter plots of data. The study investigates…
Developing the Enterprise Curriculum: Building on Rock, Not Sand
ERIC Educational Resources Information Center
Jones, Colin
2007-01-01
Entrepreneurship education is the process of providing individuals with the ability to recognize commercial opportunities and the insight, self-esteem, knowledge and skills to act on them. It includes instruction in opportunity recognition, commercializing a concept, marshalling resources in the face of risk and initiating a business venture. It…
Face and body recognition show similar improvement during childhood.
Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda
2015-09-01
Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust for face recognition under illumination variations than other shadow compensation approaches.
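The abstract's SAHE step builds on classic histogram equalization, adding a face intensity prior and local adaptivity that are not reproduced here. As a minimal illustration of the baseline operation, the sketch below implements plain global histogram equalization on a list of grayscale values; the function name and toy data are illustrative, not from the paper.

```python
# Illustrative sketch only: plain global histogram equalization, the classic
# baseline that spatially adaptive variants such as SAHE extend. The paper's
# actual pipeline (SAHE + ASC + NLDA) is not reproduced here.

def equalize_histogram(pixels, levels=256):
    """Remap grayscale values so their cumulative distribution is ~uniform."""
    n = len(pixels)
    # Histogram of intensity values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf = [0] * levels
    running = 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    # Standard equalization mapping, anchored at the first non-empty bin.
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(n - cdf_min, 1)
    return [round((cdf[p] - cdf_min) / denom * (levels - 1)) for p in pixels]

# A low-contrast "image": values clustered in a narrow band get stretched
# across the full 0..255 range.
flat = [100, 100, 101, 102, 102, 103, 104, 105]
stretched = equalize_histogram(flat)
```

Global equalization amplifies noise in smooth regions, which is exactly the artifact the paper's face intensity prior is designed to suppress.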
The effect of inversion on face recognition in adults with autism spectrum disorder.
Hedley, Darren; Brewer, Neil; Young, Robyn
2015-05-01
Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than the mouth region and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to recognizing face images taken from different viewing directions, including horizontal, overhead (looking-down), and looking-up views, and investigate a suitable monitoring scheme. This method works well for multi-view face recognition; it is also simulated and tested, showing satisfactory experimental results.
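Dlib's standard recognition pipeline compares faces via the Euclidean distance between learned embedding vectors, accepting a match below a fixed threshold (0.6 in Dlib's published convention). The paper's improved distance model is not detailed in the abstract, so the sketch below shows only the conventional baseline, with hand-made 3-D "embeddings" standing in for real 128-D Dlib descriptors.

```python
# Minimal sketch of distance-based face matching in the style of Dlib's
# recognition pipeline: identities are compared via Euclidean distance
# between embedding vectors, with a fixed acceptance threshold. The 0.6
# threshold follows Dlib's convention; the paper's "improved distance
# model" itself is not reproduced here.
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.6):
    """Return the gallery label closest to `probe`, or None when no
    enrolled embedding falls within the acceptance threshold."""
    best_label, best_dist = None, threshold
    for label, embedding in gallery.items():
        d = euclidean(probe, embedding)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy gallery: in practice these would be 128-D vectors produced by a
# face embedding network for each enrolled person.
gallery = {"alice": [0.1, 0.9, 0.2], "bob": [0.8, 0.1, 0.7]}
```

A probe embedding near an enrolled vector resolves to that identity; a probe far from every enrolled vector is rejected as unknown, which is the behavior a monitoring scheme needs for faces outside the watchlist.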
Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.
Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C
2017-11-01
Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in this population, whose behavior causes substantial harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group × substance × emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD descriptively showed the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions by OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.
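The patent's "face pattern words" and "face pattern bytes" are compact binary codes derived from facial features; the abstract does not specify the encoding, so the sketch below is purely hypothetical. It shows only the standard way such binary descriptors are compared — a Hamming distance between bit codes — with invented identities and code values.

```python
# Hypothetical illustration: compact binary face codes (in the spirit of
# the patent's "face pattern bytes") compared via Hamming distance, the
# standard metric for binary descriptors. The actual encoding used by the
# patent is not specified in the abstract and is not reproduced here.

def hamming(code_a: int, code_b: int) -> int:
    """Number of differing bits between two same-length binary codes."""
    return bin(code_a ^ code_b).count("1")

def best_match(probe: int, enrolled: dict) -> str:
    """Return the enrolled identity whose code is nearest to `probe`
    in Hamming distance."""
    return min(enrolled, key=lambda name: hamming(probe, enrolled[name]))

# Invented 8-bit codes for two enrolled identities.
enrolled = {"id_01": 0b10110010, "id_02": 0b01101101}
```

Binary codes make matching cheap (a XOR and a popcount per comparison), which is one practical motivation for byte-level face patterns over floating-point descriptors.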
Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang
2017-12-01
Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition, including 3 core components of empathy, via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) and a group with an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor of relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, that is, with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and no emotion (neutral faces) in patients with relapse/dropout. Social cognition in AUD is clinically relevant.
Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach. Copyright © 2017 by the Research Society on Alcoholism.
The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability.
Mühl, Constanze; Sheil, Orla; Jarutytė, Lina; Bestelmeyer, Patricia E G
2017-11-09
Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, the existence of a similar deficit in voice perception has emerged (phonagnosia). Face perception tests have been readily available for years, advancing our understanding of the underlying mechanisms in face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test of voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure the test discriminates between a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With a high test-retest reliability (r = .86) and short assessment duration (~10 min), this test examines individual abilities reliably and quickly, and therefore also has potential for use in developmental and neuropsychological populations.
Understanding gender bias in face recognition: effects of divided attention at encoding.
Palmer, Matthew A; Brewer, Neil; Horry, Ruth
2013-03-01
Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. Copyright © 2013 Elsevier B.V. All rights reserved.
The “eye avoidance” hypothesis of autism face processing
Tanaka, James W.; Sung, Andrew
2013-01-01
Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: 1) the holistic hypothesis, 2) the local perceptual bias hypothesis, and 3) the eye avoidance hypothesis. A review of the literature indicates that, contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that individuals with ASD demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits: individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits a heightened physiological response, as indicated by increased skin conductance and amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this "eye avoidance" strategy interferes with the ability to decode facial cues of identity, expression, and intention, exacerbating the social challenges for persons with ASD. PMID:24150885
NASA Astrophysics Data System (ADS)
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Li, Tianbi; Wang, Xueqin; Pan, Junhao; Feng, Shuyuan; Gong, Mengyuan; Wu, Yaxue; Li, Guoxiang; Li, Sheng; Yi, Li
2017-11-01
The processing of social stimuli, such as human faces, is impaired in individuals with autism spectrum disorder (ASD), which could be accounted for by their lack of social motivation. The current study examined how the attentional processing of faces in children with ASD could be modulated by the learning of face-reward associations. Sixteen high-functioning children with ASD and 20 age- and ability-matched typically developing peers participated in the experiments. All children started with a reward learning task, in which they were presented with three female faces attributed with positive, negative, and neutral values, and were required to remember the faces and their associated values. After this, they were tested on recognition of the learned faces and on a visual search task in which the learned faces served as the distractor. We found a modulatory effect of the face-reward associations on the visual search but not the recognition performance in both groups, despite the lower efficacy among children with ASD in learning the face-reward associations. Specifically, both groups responded faster when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral, suggesting efficient attentional processing of these reward-associated faces. Our findings provide direct evidence for a perceptual-level modulatory effect of reward learning on the attentional processing of faces in individuals with ASD. Autism Res 2017, 10: 1797-1807. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. In our study, we tested whether the face processing of individuals with ASD could be changed when the faces were associated with different social meanings. We found no effect of social meanings on face recognition, but both groups responded faster in the visual search task when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral.
The findings suggest that children with ASD could efficiently process faces associated with different values like typical children. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Equipping African American Clergy to Recognize Depression.
Anthony, Jean Spann; Morris, Edith; Collins, Charles W; Watson, Albert; Williams, Jennifer E; Ferguson, Bʼnai; Ruhlman, Deborah L
2016-01-01
Many African Americans (AAs) use clergy as their primary source of help for depression, with few being referred to mental health providers. This study used face-to-face workshops to train AA clergy to recognize the symptoms and levels of severity of depression. A pretest/posttest format was used to test knowledge (N = 42) about depression symptoms. Results showed that the participation improved the clergy's ability to recognize depression symptoms. Faith community nurses can develop workshops for clergy to improve recognition and treatment of depression.
Activation of the right fronto-temporal cortex during maternal facial recognition in young infants.
Carlsson, Jakob; Lagercrantz, Hugo; Olson, Linus; Printz, Gordana; Bartocci, Marco
2008-09-01
Within the first days of life, infants can already recognize their mother. This ability is based on several sensory mechanisms and increases during the first year of life, having its most crucial phase between 6 and 9 months, when cortical circuits develop. The underlying cortical structures involved in this process are still unknown. Herein we report how the prefrontal cortices of healthy 6- to 9-month-old infants react to the sight of their mothers' faces compared to an unknown female face. Concentrations of oxygenated haemoglobin [HbO2] and deoxygenated haemoglobin [HHb] were measured using near infrared spectroscopy (NIRS) in both fronto-temporal and occipital areas on the right side during exposure to maternal and unfamiliar faces. The infants exhibited a distinct and significantly higher activation-related haemodynamic response in the right fronto-temporal cortex following exposure to the image of their mother's face, [HbO2] (0.75 micromol/L, p < 0.001), as compared to that of an unknown face (0.25 micromol/L, p < 0.001). Event-related haemodynamic changes, suggesting cortical activation, in response to the sight of human faces were detected in 6- to 9-month-old children. The right fronto-temporal cortex appears to be involved in face recognition processes at this age.
Exploring the nature of facial affect processing deficits in schizophrenia.
van 't Wout, Mascha; Aleman, André; Kessels, Roy P C; Cahn, Wiepke; de Haan, Edward H F; Kahn, René S
2007-04-15
Schizophrenia has been associated with deficits in facial affect processing, especially negative emotions. However, the exact nature of the deficit remains unclear. The aim of the present study was to investigate whether schizophrenia patients have problems in automatic allocation of attention as well as in controlled evaluation of facial affect. Thirty-seven patients with schizophrenia were compared with 41 control subjects on incidental facial affect processing (gender decision of faces with a fearful, angry, happy, disgusted, and neutral expression) and degraded facial affect labeling (labeling of fearful, angry, happy, and neutral faces). The groups were matched on estimates of verbal and performance intelligence (National Adult Reading Test; Raven's Matrices), general face recognition ability (Benton Face Recognition), and other demographic variables. The results showed that patients with schizophrenia as well as control subjects demonstrate the normal threat-related interference during incidental facial affect processing. Conversely, on controlled evaluation patients were specifically worse in the labeling of fearful faces. In particular, patients with high levels of negative symptoms may be characterized by deficits in labeling fear. We suggest that patients with schizophrenia show no evidence of deficits in the automatic allocation of attention resources to fearful (threat-indicating) faces, but have a deficit in the controlled processing of facial emotions that may be specific for fearful faces.
Jemel, Boutheina; Schuller, Anne-Marie; Goffaux, Valérie
2010-10-01
Although it is generally acknowledged that familiar face recognition is fast, mandatory, and proceeds outside conscious control, it is still unclear whether processes leading to familiar face recognition occur in a linear (i.e., gradual) or a nonlinear (i.e., all-or-none) manner. To test these two alternative accounts, we recorded scalp ERPs while participants indicated whether they recognize as familiar the faces of famous and unfamiliar persons gradually revealed in a descending sequence of frames, from the noisier to the least noisy. This presentation procedure allowed us to characterize the changes in scalp ERP responses occurring prior to and up to overt recognition. Our main finding is that gradual and all-or-none processes are possibly involved during overt recognition of familiar faces. Although the N170 and the N250 face-sensitive responses displayed an abrupt activity change at the moment of overt recognition of famous faces, later ERPs encompassing the N400 and late positive component exhibited an incremental increase in amplitude as the point of recognition approached. In addition, famous faces that were not overtly recognized at one trial before recognition elicited larger ERP potentials than unfamiliar faces, probably reflecting a covert recognition process. Overall, these findings present evidence that recognition of familiar faces implicates spatio-temporally complex neural processes exhibiting differential pattern activity changes as a function of recognition state.
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
2014-10-01
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113-118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.'s (Sign Lang Stud 75:113-118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.
Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki
2016-11-28
A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, fused near infrared and visible face recognition has become an important direction in unconstrained face recognition research. In order to extract discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light image and the DCT and LBP features of the near-infrared image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The method is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. In particular, with small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% achieved by the method based on statistical feature fusion.
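The two feature types used above (low-frequency DCT coefficients and LBP histograms) can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' implementation; the retained block size `k` and the 256-bin histogram are assumptions:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_low_freq(img, k=8):
    # 2-D DCT of the image; keep only the top-left k x k (low-frequency) block
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    coeffs = C @ img @ R.T
    return coeffs[:k, :k].ravel()

def lbp_histogram(img):
    # Basic 8-neighbour LBP over interior pixels, pooled into a normalised
    # 256-bin histogram (a single partition, for brevity)
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    codes = np.zeros_like(centre, dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= ((neighbour >= centre).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

In a decision-level fusion scheme of the kind described, each feature vector would feed its own classifier, with the per-classifier labels then combined (e.g. by voting) into the final decision.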
Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?
Esins, Janina; Schultz, Johannes; Wallraven, Christian; Bülthoff, Isabelle
2014-01-01
Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP. PMID:25324757
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach to robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary pattern) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is used to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
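Two of the components above can be sketched compactly: the differential excitation of a simplified Weber local descriptor, and the symmetric Kullback-Leibler distance between histograms. The 8-neighbourhood and the normalisation details are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def swld_excitation(img):
    # Differential excitation of a (simplified) Weber local descriptor:
    # xi = arctan( sum over 8 neighbours of (neighbour - centre) / centre )
    img = img.astype(np.float64) + 1e-6   # avoid division by zero
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    total = np.zeros_like(centre)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += img[1+dy:h-1+dy, 1+dx:w-1+dx] - centre
    return np.arctan(total / centre)

def symmetric_kl(p, q, eps=1e-12):
    # Symmetric Kullback-Leibler distance between two normalised histograms
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

The excitation map would typically be quantised into a histogram per image block, after which `symmetric_kl` compares encoded faces.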
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
When the face fits: recognition of celebrities from matching and mismatching faces and voices.
Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain
2014-01-01
The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between the voice and the accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses even as the proportion of these responses declined across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and the results are discussed in the context of a person-recognition framework.
Formal implementation of a performance evaluation model for the face recognition system.
Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young
2008-01-01
Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that favor the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed objective evaluations by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
Impaired face recognition is associated with social inhibition
Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U
2016-01-01
Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety. PMID:26776300
Song, Sunbin; Garrido, Lúcia; Nagy, Zoltan; Mohammadi, Siawoosh; Steel, Adam; Driver, Jon; Dolan, Ray J.; Duchaine, Bradley; Furl, Nicholas
2015-01-01
Individuals with developmental prosopagnosia (DP) experience face recognition impairments despite normal intellect and low-level vision and no history of brain damage. Prior studies using diffusion tensor imaging in small samples of subjects with DP (n=6 or n=8) offer conflicting views on the neurobiological bases for DP, with one suggesting white matter differences in two major long-range tracts running through the temporal cortex, and another suggesting white matter differences confined to fibers local to ventral temporal face-specific functional regions of interest (fROIs) in the fusiform gyrus. Here, we address these inconsistent findings using a comprehensive set of analyses in a sample of DP subjects larger than both prior studies combined (n=16). While we found no microstructural differences in long-range tracts between DP and age-matched control participants, we found differences local to face-specific fROIs, and relationships between these microstructural measures and face recognition ability. We conclude that subtle differences in local rather than long-range tracts in the ventral temporal lobe are more likely associated with developmental prosopagnosia. PMID:26456436
Psychopaths lack the automatic avoidance of social threat: relation to instrumental aggression.
Louise von Borries, Anna Katinka; Volman, Inge; de Bruijn, Ellen Rosalia Aloïs; Bulten, Berend Hendrik; Verkes, Robbert Jan; Roelofs, Karin
2012-12-30
Psychopathy (PP) is associated with marked abnormalities in social emotional behaviour, such as high instrumental aggression (IA). A crucial but largely ignored question is whether automatic social approach-avoidance tendencies may underlie this condition. We tested whether offenders with PP show lack of automatic avoidance tendencies, usually activated when (healthy) individuals are confronted with social threat stimuli (angry faces). We applied a computerized approach-avoidance task (AAT), where participants pushed or pulled pictures of emotional faces using a joystick, upon which the faces decreased or increased in size, respectively. Furthermore, participants completed an emotion recognition task which was used to control for differences in recognition of facial emotions. In contrast to healthy controls (HC), PP patients showed total absence of avoidance tendencies towards angry faces. Interestingly, those responses were related to levels of instrumental aggression and the (in)ability to experience personal distress (PD). These findings suggest that social performance in psychopaths is disturbed on a basic level of automatic action tendencies. The lack of implicit threat avoidance tendencies may underlie their aggressive behaviour. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Face averages enhance user recognition for smartphone security.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
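The 'face-average' representation described above is straightforward to compute once the enrolment images are captured. A minimal sketch, assuming same-size, pre-aligned grayscale images (alignment and any warping to a common shape are omitted here):

```python
import numpy as np

def face_average(aligned_imgs):
    # Average template from a list of aligned, same-size face images:
    # the pixel-wise mean washes out image-specific lighting and pose noise,
    # approximating a familiarity-like stable representation
    stack = np.stack([im.astype(np.float64) for im in aligned_imgs])
    return stack.mean(axis=0)
```

A verification system would then match a probe image against this single averaged template rather than against one enrolment photograph.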
Gender differences in recognition of toy faces suggest a contribution of experience.
Ryan, Kaitlin F; Gauthier, Isabel
2016-12-01
When there is a gender effect, women perform better then men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally either due to evolutionary reasons or the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measures individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hyperspectral face recognition with spatiospectral information fusion and PLS regression.
Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal
2015-03-01
Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least square regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near infrared response spectrum.
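One hypothetical way to collapse a hyperspectral cube into a single fused band via covariance-based weighting, loosely in the spirit of the spatiospectral fusion described above. The weighting scheme here (each band weighted by its covariance with the mean band) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def fuse_bands(cube):
    # cube: (H, W, B) hyperspectral image cube.
    # Weight each band by the magnitude of its covariance with the mean band,
    # then collapse the spectral dimension into a single 2-D image.
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(np.float64)   # (H*W, B)
    mean_band = flat.mean(axis=1)                   # per-pixel spectral mean
    centred = flat - flat.mean(axis=0)
    mc = mean_band - mean_band.mean()
    weights = centred.T @ mc / len(mc)              # covariance with mean band
    weights = np.abs(weights) / np.abs(weights).sum()
    return (flat @ weights).reshape(H, W)
```

In the proposed pipeline, the fused representation would then feed a partial least squares regression classifier.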
Face matching impairment in developmental prosopagnosia.
White, David; Rivolta, Davide; Burton, A Mike; Al-Janabi, Shahd; Palermo, Romina
2017-02-01
Developmental prosopagnosia (DP) is commonly referred to as 'face blindness', a term that implies a perceptual basis to the condition. However, DP presents as a deficit in face recognition and is diagnosed using memory-based tasks. Here, we test face identification ability in six people with DP, who are severely impaired on face memory tasks, using tasks that do not rely on memory. First, we compared DP to control participants on a standardized test of unfamiliar face matching using facial images taken on the same day and under standardized studio conditions (Glasgow Face Matching Test; GFMT). Scores for DP participants did not differ from normative accuracy scores on the GFMT. Second, we tested face matching performance on a test created using images that were sourced from the Internet and so varied substantially due to changes in viewing conditions and in a person's appearance (Local Heroes Test; LHT). DP participants showed significantly poorer matching accuracy on the LHT than control participants, for both unfamiliar and familiar face matching. Interestingly, this deficit is specific to 'match' trials, suggesting that people with DP may have particular difficulty in matching images of the same person that contain natural day-to-day variations in appearance. We discuss these results in the broader context of individual differences in face matching ability.
Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans
Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene
2014-01-01
The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, a high recognition rate can be obtained when each face has enough available training samples. In practical applications, however, only limited training samples are available. To obtain enough training samples, many methods use the original training samples together with corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and their mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. The original training samples and the virtual samples are then combined to recognize the test sample. Experimental results on five face databases show that the proposed method partly overcomes the challenges posed by variations in pose, facial expression, and illumination in the original face images.
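The virtual sample described above is simple to generate: average the image with its horizontal mirror. A minimal numpy sketch (a symmetrized virtual sample, as the abstract describes):

```python
import numpy as np

def symmetrical_virtual_sample(img):
    # Virtual training sample: pixel-wise average of the image and its
    # left-right mirror; the result is exactly left-right symmetric
    img = img.astype(np.float64)
    mirror = img[:, ::-1]
    return (img + mirror) / 2.0
```

Each original training image would contribute one such virtual sample, and both sets would then be pooled into the representation dictionary.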
Nonmotor Symptoms in Parkinson Disease: A Descriptive Review on Social Cognition Ability.
Palmeri, Rosanna; Lo Buono, Viviana; Corallo, Francesco; Foti, Maria; Di Lorenzo, Giuseppe; Bramanti, Placido; Marino, Silvia
2017-03-01
Parkinson disease (PD) is a neurodegenerative disorder characterized by motor and nonmotor symptoms. Nonmotor symptoms include cognitive deficits and impaired emotion recognition, associated with loss of dopaminergic neurons in the substantia nigra and with alterations in frontostriatal circuits. In this review, we analyzed studies of social cognition in patients with PD. We searched the PubMed and Web of Science databases and screened the references of included studies and review articles for additional citations. Of the initial 260 articles, only 18 met the search criteria. A total of 496 patients were compared with 514 healthy controls, using 16 different tests that assessed subcomponents of social cognition such as theory of mind, decision-making, and emotional face recognition. Studies of cognitive function in patients with PD have focused on executive function. Patients with PD showed impairment in social cognition from the earliest stages of the disease. This ability does not seem to be significantly associated with other cognitive functions.
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, an important biometric identification method with the advantages of being friendly, natural, and convenient, has attracted increasing attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition, focusing on how various preprocessing methods in the face detection stage affect the recognition results obtained with KPCA. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images using erosion and dilation (the opening and closing operations) together with an illumination compensation method, and then apply face recognition based on kernel principal component analysis (KPCA), with experiments carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and thus achieves a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and hence different recognition rates in the recognition stage. In addition, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
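A compact numpy sketch of polynomial-kernel KPCA feature extraction of the kind the abstract describes. The kernel offset, polynomial degree, and component count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kpca_features(X, degree=2, n_components=5):
    # X: (n_samples, n_features) flattened face images.
    # Polynomial-kernel PCA: build the kernel matrix, double-centre it,
    # eigendecompose, and project the samples onto the leading components.
    K = (X @ X.T + 1.0) ** degree
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one        # double-centred kernel
    vals, vecs = np.linalg.eigh(Kc)                   # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]       # take the largest
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalise eigenvectors
    return Kc @ alphas                                # (n_samples, n_components)
```

Changing `degree` here corresponds to the observation in the abstract that the power of the polynomial kernel affects the recognition result.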
A real time mobile-based face recognition with fisherface methods
NASA Astrophysics Data System (ADS)
Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.
2018-03-01
Face recognition is a research field in computer vision concerned with learning faces and determining the identity of a face in an image presented to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server site and search for a person with certain facial traits. To achieve this goal, the face recognition application uses image processing in two phases, a preprocessing phase and a recognition phase. In the preprocessing phase, the system transforms the input image into the best possible image for the recognition phase; the purpose is to reduce noise and strengthen the signal in the image. For the recognition phase we use the Fisherface method, chosen because it performs well even with limited training data. In our experiments, the accuracy of face recognition using Fisherface was 90%.
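The Fisherface method named above is Fisher's linear discriminant analysis applied to face images (in the full method, PCA is applied first so the within-class scatter is non-singular). A minimal NumPy sketch of the LDA core on toy data standing in for two identities; the data, dimensions, and label names are illustrative only:

```python
import numpy as np

def fisherfaces(X, y, n_components=1):
    """Fisher (LDA) projection for flattened face images X with integer labels y."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                       # within-class scatter
    Sb = np.zeros((d, d))                       # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Generalised eigenproblem Sb w = lambda Sw w, via pseudo-inverse for stability
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1][:n_components]
    return vecs.real[:, order]                  # projection matrix W, shape (d, n_components)

# Two toy "identities": Gaussian clouds standing in for flattened face images
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (10, 6)), rng.normal(3.0, 1.0, (10, 6))])
y = np.array([0] * 10 + [1] * 10)
W = fisherfaces(X, y)
proj = X @ W    # 1-D Fisher scores; the two identities separate along this axis
```

A probe face would be projected with the same `W` and assigned to the nearest class mean, which is why the method tolerates small galleries: it explicitly maximises between-identity separation relative to within-identity variation.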
Impairment in the recognition of emotion across different media following traumatic brain injury.
Williams, Claire; Wood, Rodger Ll
2010-02-01
The current study examined emotion recognition following traumatic brain injury (TBI) and whether performance differed according to the affective valence and the type of media presentation of the stimuli. A total of 64 patients with TBI and matched controls completed the Emotion Evaluation Test (EET) and the Ekman 60 Faces Test (E-60-FT). Patients with TBI also completed measures of information processing and verbal ability. Results revealed that the TBI group were significantly impaired compared to controls when recognizing emotion on the EET and E-60-FT. A significant main effect of valence was found in both groups, with poorer recognition of negative emotions; however, the difference between the recognition of positive and negative emotions was larger in the TBI group. The TBI group were also more accurate at recognizing emotion displayed in audiovisual media (EET) than in still media (E-60-FT). No significant relationship was obtained between the emotion recognition tasks and information-processing speed, but a significant positive relationship was found between the E-60-FT and one measure of verbal ability. These findings support models of emotion that specify separate neurological pathways for certain emotions and different media, and confirm that patients with TBI are vulnerable to emotion recognition difficulties.
Gender interactions in the recognition of emotions and conduct symptoms in adolescents.
Halász, József; Aspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit
2014-01-01
According to the literature, impairment in the recognition of emotions may be related to an antisocial developmental pathway. In the present study, gender-specific interactions between emotion recognition and conduct symptoms were studied in non-clinical adolescents. After informed consent, 29 boys and 24 girls (13-16 years, 14 ± 0.1 years) participated in the study. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of basic emotions was analyzed according to both the gender of the participants and the gender of the stimulus faces via the "Facial Expressions of Emotion - Stimuli and Tests". Girls were significantly better than boys at recognizing disgust, irrespective of the gender of the stimulus faces, although both genders recognized disgust significantly better in male than in female stimulus faces. Both boys and girls recognized sadness significantly better in female than in male stimulus faces. There was no gender effect (of either participant or stimulus face) on the recognition of other emotions. Conduct scores in boys were inversely correlated with the recognition of fear in male stimulus faces (R=-0.439, p<0.05) and with overall emotion recognition in male stimulus faces (R=-0.558, p<0.01). In girls, conduct scores showed a tendency toward a positive correlation with disgust recognition in female stimulus faces (R=0.376, p<0.07). A gender-specific interaction between the recognition of emotions and the antisocial developmental pathway is suggested.
From face processing to face recognition: Comparing three different processing levels.
Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J
2017-01-01
Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. 
In favor of the 'superordinate advantage' hypothesis, or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels, as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "face recognition" - in fact differ. Copyright © 2016 Elsevier B.V. All rights reserved.
Impairment in face processing in autism spectrum disorder: a developmental perspective.
Greimel, Ellen; Schulte-Rüther, Martin; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin
2014-09-01
Findings on face identity and facial emotion recognition in autism spectrum disorder (ASD) are inconclusive. Moreover, little is known about the developmental trajectory of face processing skills in ASD. Taking a developmental perspective, the aim of this study was to extend previous findings on face processing skills in a sample of adolescents and adults with ASD. N = 38 adolescents and adults (13-49 years) with high-functioning ASD and n = 37 typically developing (TD) control subjects matched for age and IQ participated in the study. Moreover, n = 18 TD children between the ages of 8 and 12 were included to address the question whether face processing skills in ASD follow a delayed developmental pattern. Face processing skills were assessed using computerized tasks of face identity recognition (FR) and identification of facial emotions (IFE). ASD subjects showed impaired performance on several parameters of the FR and IFE task compared to TD control adolescents and adults. Whereas TD adolescents and adults outperformed TD children in both tasks, performance in ASD adolescents and adults was similar to the group of TD children. Within the groups of ASD and control adolescents and adults, no age-related changes in performance were found. Our findings corroborate and extend previous studies showing that ASD is characterised by broad impairments in the ability to process faces. These impairments seem to reflect a developmentally delayed pattern that remains stable throughout adolescence and adulthood.
Arnold, Aiden E G F; Iaria, Giuseppe; Goghari, Vina M
2016-02-28
Schizophrenia is associated with deficits in face perception and emotion recognition. Despite consistent behavioural results, the neural mechanisms underlying these cognitive abilities have been difficult to isolate, in part because of differences between studies in the neuroimaging methods used to identify regions in the face processing system. Given this problem, we aimed to validate a recently developed fMRI-based dynamic functional localizer task for use in studies of psychiatric populations, and specifically schizophrenia. Previously, this functional localizer successfully identified each of the core face processing regions (i.e. fusiform face area, occipital face area, superior temporal sulcus) and regions within an extended system (e.g. amygdala) in healthy individuals. In this study, we tested the localizer's success rate in 27 schizophrenia patients and 24 community controls. Overall, the core face processing regions were localized equally well in the schizophrenia and control groups. Additionally, the amygdala, a candidate brain region from the extended system, was identified in nearly half the participants of both groups. These results indicate the effectiveness of a dynamic functional localizer at identifying regions of interest associated with face perception and emotion recognition in schizophrenia. The use of dynamic functional localizers may help standardize the investigation of the face and emotion processing system in this and other clinical populations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Rossion, Bruno; Michel, Caroline
2018-03-16
We report normative data from a large (N = 307) sample of young adult participants tested with a computerized version of the long form of the classical Benton Facial Recognition Test (BFRT; Benton & Van Allen, 1968). The BFRT-c requires participants to match a target face photograph to either one or three of six face photographs presented simultaneously. We found that the percent accuracy on the BFRT-c (81%-83%) was below ceiling yet well above chance level, with little interindividual variance in this typical population sample, two important aspects of a sensitive clinical test. Although the split-half reliability on response accuracy was relatively low, due to the large variability in difficulty across items, the correct response times measured in this version-completed in 3 min, on average-provide a reliable and critical complementary measure of performance at individual unfamiliar-face matching. In line with previous observations from other measures, females outperformed male participants at the BFRT-c, especially for female faces. In general, performance was also lower following lighting changes than following head rotations, in line with previous studies that have emphasized participants' limited ability to match pictures of unfamiliar faces with important variations in illumination. Overall, this normative data set supports the validity of the BFRT-c as a key component of a battery of tests to identify clinical impairments in individual face recognition, such as observed in acquired prosopagnosia. However, this analysis strongly recommends that researchers consider the full test results: Beyond global indexes of performance based on accuracy rates only, they should consider the time taken to match individual faces as well as the variability in performance across items.
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
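The paper's optical VanderLugt-correlator pipeline cannot be reproduced in a few lines, but the colour-signature idea it builds on can be: normalising RGB discounts overall illumination so that chrominance, not brightness, drives the decision. An illustrative sketch of red-stain (facial bleeding) detection under that normalisation, assuming NumPy; the threshold and toy pixel values are hypothetical, not the authors':

```python
import numpy as np

def normalised_rgb(rgb):
    """Divide each pixel by its channel sum so that brightness cancels out."""
    rgbf = rgb.astype(float)
    return rgbf / (rgbf.sum(axis=-1, keepdims=True) + 1e-9)

def red_stain_fraction(rgb, r_thresh=0.5):
    """Fraction of pixels dominated by red chrominance; a bright and a dim
    red patch score alike because illumination has been normalised away."""
    r = normalised_rgb(rgb)[..., 0]
    return float((r > r_thresh).mean())

blood = np.full((4, 4, 3), (180, 30, 30), dtype=np.uint8)    # strongly red patch
skin = np.full((4, 4, 3), (200, 160, 140), dtype=np.uint8)   # ordinary skin tone
print(red_stain_fraction(blood), red_stain_fraction(skin))   # 1.0 0.0
```

The paper additionally folds HSV planes into the correlation filter itself; this sketch only shows why the normalised-colour signature discriminates stains from similarly shaped faces.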
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly because of their high computational demands, whereas the PCA (principal component analysis) and LDA (linear discriminant analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time needed to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
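The intuition behind CPB matching is that a compressor exploits redundancy, so a probe mixed with a gallery image of the same person compresses disproportionately well. A minimal sketch of one such composite ratio, using zlib on byte strings as a stand-in for JPEG on images; the exact CCR formula in the paper may differ, and the toy "images" are hypothetical:

```python
import zlib

def csize(b: bytes) -> int:
    """Compressed size with zlib at maximum effort."""
    return len(zlib.compress(b, 9))

def composite_ratio(probe: bytes, gallery: bytes) -> float:
    """Higher when the two inputs share structure: the concatenated stream
    then compresses better than the two inputs compressed separately."""
    return (csize(probe) + csize(gallery)) / csize(probe + gallery)

# Toy byte strings standing in for encoded face images
probe = b"facial-pattern-A" * 50
same_person = b"facial-pattern-A" * 50
other_person = b"different-face-Z" * 50
# The shared-structure pair yields the larger ratio, so it wins the match
print(composite_ratio(probe, same_person) > composite_ratio(probe, other_person))
```

As in the paper, matching reduces to one compression of the mixed input per gallery entry, which is what makes the method fast.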
Developmental Differences in Holistic Interference of Facial Part Recognition
Nakabayashi, Kazuyo; Liu, Chang Hong
2013-01-01
Research has shown that adults’ recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether children of 6 and 9–10 years of age would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, whereby the part was presented either in isolation or in a whole face. The results showed that while all the groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, continuing into older childhood and adulthood. PMID:24204847
Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Bartling, Jürgen; Schulte-Körne, Gerd; Dieler, Alica C
2016-09-01
The present study explored the neurophysiological correlates of perception and recognition of emotional facial expressions in adolescent anorexia nervosa (AN) patients using event-related potentials (ERPs). We included 20 adolescent girls with AN and 24 healthy girls and recorded ERPs during a passive viewing task and three active tasks requiring processing of emotional faces in varying processing depths; one of the tasks also assessed emotion recognition abilities behaviourally. Despite the absence of behavioural differences, we found that across all tasks AN patients exhibited a less pronounced early posterior negativity (EPN) in response to all facial expressions compared to controls. The EPN is an ERP component reflecting an automatic, perceptual processing stage which is modulated by the intrinsic salience of a stimulus. Hence, the less pronounced EPN in anorexic girls suggests that they might perceive other people's faces as less intrinsically relevant, i.e. as less "important" than do healthy girls. Copyright © 2016 Elsevier B.V. All rights reserved.
Gerlach, Christian; Starrfelt, Randi
2018-03-20
There has been an increase in studies adopting an individual difference approach to examine visual cognition and in particular in studies trying to relate face recognition performance with measures of holistic processing (the face composite effect and the part-whole effect). In the present study we examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition is related to the magnitude of the global precedence effect.
Ma, Yina; Han, Shihui
2010-06-01
Human adults usually respond faster to their own faces than to those of others. We tested the hypothesis that an implicit positive association (IPA) with self mediates self-advantage in face recognition through 4 experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and led to a weakened IPA with self, we found that the self-face advantage in an implicit face-recognition task that required identification of face orientation was eliminated by the SCT priming. Moreover, the SCT effect on self-face recognition was evident only with left-hand responses. Furthermore, the SCT effect on self-face recognition was observed in both Chinese and American participants. Our findings support the IPA hypothesis, which defines a social cognitive mechanism of self-advantage in face recognition.
Two areas for familiar face recognition in the primate brain.
Landi, Sofia M; Freiwald, Winrich A
2017-08-11
Familiarity alters face recognition: Familiar faces are recognized more accurately than unfamiliar ones and under difficult viewing conditions when unfamiliar face recognition fails. The neural basis for this fundamental difference remains unknown. Using whole-brain functional magnetic resonance imaging, we found that personally familiar faces engage the macaque face-processing network more than unfamiliar faces. Familiar faces also recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole. These two areas, but not the core face-processing network, responded to familiar faces emerging from a blur with a characteristic nonlinear surge, akin to the abruptness of familiar face recognition. In contrast, responses to unfamiliar faces and objects remained linear. Thus, two temporal lobe areas extend the core face-processing network into a familiar face-recognition system. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database, Quality Labeled Faces in the Wild (QLFW), represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment, for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur, and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortion and to assess human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortion. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy combining incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities; our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Face recognition in the thermal infrared domain
NASA Astrophysics Data System (ADS)
Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.
2017-10-01
Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most research on face recognition is based on visible light, and state-of-the-art face recognition systems operating in the visible spectrum achieve a very high level of recognition accuracy under controlled environmental conditions. Thermal infrared imagery, covering the mid-wavelength and far-wavelength infrared bands, seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition; the characteristics of thermal images offer advantages over visible-light images and can improve human face recognition algorithms in several respects. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges facing face recognition systems in the thermal infrared domain.
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Near infrared face images, being light-independent, can avoid or limit these drawbacks, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Near infrared and visible fusion face recognition has therefore become an important direction in unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. First, NSCT is applied to the infrared and visible face images respectively, exploiting the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion combines all the features for the final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
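The LBP descriptor used in the abstract above is simple enough to sketch: each pixel is encoded by thresholding its 8 neighbours against it, and the histogram of the resulting codes serves as the texture feature. A minimal NumPy version on a toy grayscale "face" (the paper applies this to NSCT sub-bands, not raw images, and also uses Gabor-filtered variants):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern codes for a grayscale uint8 image."""
    c = img[1:-1, 1:-1]                       # centre pixels (border dropped)
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.uint8) << bit   # one bit per neighbour
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes, used as the face feature."""
    h, _ = np.histogram(lbp_image(img), bins=256, range=(0, 256), density=True)
    return h

rng = np.random.default_rng(2)
face = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # stand-in face patch
feat = lbp_histogram(face)
print(feat.shape)                                       # (256,)
```

Two faces would then be compared by a histogram distance (e.g. chi-square), and in the paper those per-band scores are what the score-level fusion combines.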
Using eye movements as an index of implicit face recognition in autism spectrum disorder.
Hedley, Darren; Young, Robyn; Brewer, Neil
2012-10-01
Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
Holistic face training enhances face processing in developmental prosopagnosia
Cohan, Sarah; Nakayama, Ken
2014-01-01
Prosopagnosia has largely been regarded as an untreatable disorder. However, recent case studies using cognitive training have shown that it is possible to enhance face recognition abilities in individuals with developmental prosopagnosia. Our goal was to determine if this approach could be effective in a larger population of developmental prosopagnosics. We trained 24 developmental prosopagnosics using a 3-week online face-training program targeting holistic face processing. Twelve subjects with developmental prosopagnosia were assessed before and after training; the other 12 were assessed before and after a waiting period, then performed the training, and were assessed again. The assessments included measures of front-view face discrimination, face discrimination with viewpoint changes, measures of holistic face processing, and a 5-day diary to quantify potential real-world improvements. Compared with the waiting period, developmental prosopagnosics showed moderate but significant overall training-related improvements on measures of front-view face discrimination. Those who reached the more difficult levels of training (‘better’ trainees) showed the strongest improvements in front-view face discrimination and showed significantly increased holistic face processing, to the point of being similar to unimpaired control subjects. Despite challenges in characterizing developmental prosopagnosics’ everyday face recognition and potential biases in self-report, results also showed modest but consistent self-reported diary improvements. In summary, we demonstrate that by using cognitive training that targets holistic processing, it is possible to enhance face perception across a group of developmental prosopagnosics, and we further suggest that those who improved the most on the training task received the greatest benefits. PMID:24691394
Face Averages Enhance User Recognition for Smartphone Security
Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
Face Recognition in Humans and Machines
NASA Astrophysics Data System (ADS)
O'Toole, Alice; Tistarelli, Massimo
The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.
Role of fusiform and anterior temporal cortical areas in facial recognition.
Nasr, Shahin; Tootell, Roger B H
2012-11-15
Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast reversed faces. However, response accuracy was better correlated to recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E
2006-01-01
Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.
Gender-Based Prototype Formation in Face Recognition
ERIC Educational Resources Information Center
Baudouin, Jean-Yves; Brochard, Renaud
2011-01-01
The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, the individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…
Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.
Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted
2017-07-01
Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.
ERIC Educational Resources Information Center
Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy
2009-01-01
The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…
The Development of Facial Emotion Recognition: The Role of Configural Information
ERIC Educational Resources Information Center
Durand, Karine; Gallay, Mathieu; Seigneuric, Alix; Robichon, Fabrice; Baudouin, Jean-Yves
2007-01-01
The development of children's ability to recognize facial emotions and the role of configural information in this development were investigated. In the study, 100 5-, 7-, 9-, and 11-year-olds and 26 adults needed to recognize the emotion displayed by upright and upside-down faces. The same participants needed to recognize the emotion displayed by…
Infants' Recognition of Objects Using Canonical Color
ERIC Educational Resources Information Center
Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.
2010-01-01
We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…
The many faces of research on face perception.
Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M
2011-06-12
Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.
Jehna, Margit; Neuper, Christa; Petrovic, Katja; Wallner-Blazek, Mirja; Schmidt, Reinhold; Fuchs, Siegrid; Fazekas, Franz; Enzinger, Christian
2010-07-01
Multiple sclerosis (MS) is a chronic multifocal CNS disorder which can affect higher-order cognitive processes. Whereas cognitive disturbances in MS are increasingly better characterised, emotional facial expression (EFE) recognition has rarely been tested, despite its importance for adequate social behaviour. We tested 20 patients with a clinically isolated syndrome suggestive of MS (CIS) or MS and 23 healthy controls (HC) for the ability to differentiate between emotional facial stimuli, controlling for the influence of depressive mood (ADS-L). We screened for cognitive dysfunction using the Faces Symbol Test (FST). The patients demonstrated significantly decreased reaction times on the emotion recognition tests compared with HC. However, the results also suggested worse cognitive abilities in the patients. Emotional and cognitive test results were correlated. This exploratory pilot study suggests that emotion recognition deficits might be prevalent in MS. However, future studies will be needed to overcome the limitations of this study. Copyright 2010 Elsevier B.V. All rights reserved.
Training facial expression production in children on the autism spectrum.
Gordon, Iris; Pierce, Matthew D; Bartlett, Marian S; Tanaka, James W
2014-10-01
Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time. Before and after playing the Angry and Happy versions of FaceMaze, children posed "happy" and "angry" expressions. Naïve raters judged the post-FaceMaze "happy" and "angry" expressions of the ASD group as higher in quality than their pre-FaceMaze productions. Moreover, the post-game expressions of the ASD group were rated as equal in quality as the expressions of the TD group.
Liu, Shaoying; Quinn, Paul C; Xiao, Naiqi G; Wu, Zhijun; Liu, Guangxi; Lee, Kang
2018-06-01
Infants typically see more own-race faces than other-race faces. Existing evidence shows that this difference in face race experience has profound consequences for face processing: as early as 6 months of age, infants scan own- and other-race faces differently and display superior recognition for own- relative to other-race faces. However, it is unclear whether scanning of own-race faces is related to the own-race recognition advantage in infants. To bridge this gap in the literature, the current study used eye tracking to investigate the relation between own-race face scanning and recognition in 6- and 9-month-old Asian infants (N = 82). The infants were familiarized with dynamic own- and other-race faces, and then their face recognition was tested with static face images. Both age groups recognized own- but not other-race faces. Also, regardless of race, the more infants scanned the eyes of the novel versus familiar faces at test, the better their face-recognition performance. In addition, both 6- and 9-month-olds fixated significantly longer on the nose of own-race faces, and greater fixation on the nose during test trials correlated positively with individual novelty preference scores in the own- but not other-race condition. The results suggest that some aspects of the relation between recognition and scanning are independent of differential experience with face race, whereas other aspects are affected by such experience. More broadly, the findings imply that scanning and recognition may become linked during infancy at least in part through the influence of perceptual experience. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Hedley, Darren; Brewer, Neil; Young, Robyn
2015-01-01
Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age- and IQ-matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…
The hierarchical brain network for face recognition.
Zhen, Zonglei; Fang, Huizhen; Liu, Jia
2013-01-01
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.
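The clustering step described in this abstract (grouping face-selective regions into sub-networks by the strength of their pairwise functional connectivity) can be caricatured as finding connected components in a thresholded connectivity matrix. This is only a sketch of the idea under that assumption, not the authors' actual analysis:

```python
import numpy as np

def subnetworks(conn, threshold):
    """Partition regions into sub-networks by connectivity strength.

    conn: symmetric (n, n) matrix of functional-connectivity values.
    threshold: minimum strength for two regions to count as linked
    (an illustrative parameter, not taken from the paper).
    Returns a list of component labels, one per region: regions joined
    by above-threshold edges, directly or transitively, share a label.
    """
    n = conn.shape[0]
    adj = conn >= threshold          # boolean adjacency matrix
    labels = [-1] * n
    comp = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]               # depth-first flood fill
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = comp
            stack.extend(j for j in range(n)
                         if j != i and adj[i, j] and labels[j] == -1)
        comp += 1
    return labels
```

Two strongly coupled regions and one weakly coupled region would thus fall into two sub-networks.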
Tanaka, James W; Kaiser, Martha D; Hagen, Simen; Pierce, Lara J
2014-05-01
Given that all faces share the same set of features (two eyes, a nose, and a mouth) arranged in a similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same-different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perception, but rather impairs the perception of information in the mouth region of the face.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space, with moderate recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores produced by these algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
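The score-level fusion this abstract describes (one score per matcher, stacked into a score vector, then fed to a trained classifier) can be sketched as follows. The three-matcher setup comes from the abstract, but the toy 1-nearest-neighbour classifier below is an assumed stand-in for the k-NN/SVM/BLR classifiers, not the paper's implementation:

```python
import numpy as np

def fuse_scores(score_cols):
    """Stack per-algorithm match scores into score vectors.

    score_cols: list of three 1-D arrays, one per matcher (e.g. the
    circular Gaussian filter, face pattern byte, and LDA scores named
    in the abstract). Returns an (n_probes, 3) array of score vectors.
    """
    return np.column_stack(score_cols)

def nn_classify(train_vecs, train_labels, probe_vec):
    """Label a probe score vector by its nearest training vector,
    a minimal substitute for a trained k-NN/SVM/BLR classifier."""
    dists = np.linalg.norm(train_vecs - probe_vec, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In the paper's protocol the classifier would instead be trained and evaluated with 10-fold cross-validation over the 105-subject score vectors; the geometry of the decision (genuine vs. impostor score vectors separating in 3-D score space) is the same.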
"What" precedes "which": developmental neural tuning in face- and place-related cortex.
Scherf, K Suzanne; Luna, Beatriz; Avidan, Galia; Behrmann, Marlene
2011-09-01
Although category-specific activation for faces in the ventral visual pathway appears adult-like in adolescence, recognition abilities for individual faces are still immature. We investigated how the ability to represent "individual" faces and houses develops at the neural level. Category-selective regions of interest (ROIs) for faces in the fusiform gyrus (FG) and for places in the parahippocampal place area (PPA) were identified individually in children, adolescents, and adults. Then, using a functional magnetic resonance imaging adaptation paradigm, we measured category selectivity and individual-level adaptation for faces and houses in each ROI. Only adults exhibited both category selectivity and individual-level adaptation bilaterally for faces in the FG and for houses in the PPA. Adolescents showed category selectivity bilaterally for faces in the FG and houses in the PPA. Despite this profile of category selectivity, adolescents only exhibited individual-level adaptation for houses bilaterally in the PPA and for faces in the "left" FG. Children only showed category-selective responses for houses in the PPA, and they failed to exhibit category-selective responses for faces in the FG and individual-level adaptation effects anywhere in the brain. These results indicate that category-level neural tuning develops prior to individual-level neural tuning and that face-related cortex is disproportionately slower in this developmental transition than is place-related cortex.
Age differences in accuracy and choosing in eyewitness identification and face recognition.
Searcy, J H; Bartlett, J C; Memon, A
1999-05-01
Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in the perceptual integrative processes underlying the experience of familiarity.
Multisensory emotion perception in congenitally, early, and late deaf CI users
Fengler, Ineke; Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2017-01-01
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525
Further insight into self-face recognition in schizophrenia patients: Why ambiguity matters.
Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stephane
2016-03-01
Although some studies have reported specifically self-face processing deficits in patients with schizophrenia disorder (SZ), it remains unclear whether these deficits instead reflect a more global face processing deficit. Contradictory results are probably due to the different methodologies employed and the lack of control of other confounding factors. Moreover, no study has so far evaluated possible daily-life self-face recognition difficulties in SZ. Therefore, our primary objective was to investigate self-face recognition in patients suffering from SZ compared to healthy controls (HC) using an "objective measure" (reaction time and accuracy) and a "subjective measure" (self-report of daily self-face recognition difficulties). Twenty-four patients with SZ and 23 HC performed a self-face recognition task and completed a questionnaire evaluating daily difficulties in self-face recognition. The recognition task material consisted of three different faces (the participant's own, a famous, and an unknown face) morphed in steps of 20%. Results showed that SZ were overall slower than HC regardless of face identity, but less accurate only for the faces containing 60%-40% morphing. Moreover, SZ and HC reported a similar number of daily problems with self/other face recognition. No significant correlations were found between objective and subjective measures (p > 0.05). The small sample size and relatively mild severity of psychopathology do not allow us to generalize our results. These results suggest that: (1) patients with SZ are as capable of recognizing their own face as HC, although they are susceptible to ambiguity; (2) there are far fewer self-recognition deficits in schizophrenia patients than previously postulated. Copyright © 2015 Elsevier Ltd. All rights reserved.
Unger, Ashley; Alm, Kylie H.; Collins, Jessica A.; O’Leary, Jacqueline M.; Olson, Ingrid R.
2017-01-01
Objective The extended face network contains clusters of neurons that perform distinct functions on facial stimuli. Regions in the posterior ventral visual stream appear to perform basic perceptual functions on faces, while more anterior regions, such as the ventral anterior temporal lobe and amygdala, function to link mnemonic and affective information to faces. Anterior and posterior regions are interconnected by long-range white matter tracts; however, it is not known whether variation in the connectivity of these pathways explains cognitive performance. Methods Here, we used diffusion imaging and deterministic tractography in a cohort of 28 neurologically normal adults ages 18–28 to examine microstructural properties of visual fiber pathways and their relationship to certain mnemonic and affective functions involved in face processing. We investigated how inter-individual variability in two tracts, the inferior longitudinal fasciculus (ILF) and the inferior fronto-occipital fasciculus (IFOF), related to performance on tests of facial emotion recognition and face memory. Results Results revealed that microstructure of both tracts predicted variability in behavioral performance indexed by both tasks, suggesting that the ILF and IFOF play a role in facilitating our ability to discriminate emotional expressions in faces, as well as to remember unique faces. Variation in a control tract, the uncinate fasciculus, did not predict performance on these tasks. Conclusions These results corroborate and extend the findings of previous neuropsychology studies investigating the effects of damage to the ILF and IFOF, and demonstrate that differences in face processing abilities are related to white matter microstructure, even in healthy individuals. PMID:26888615
Looking for myself: current multisensory input alters self-face recognition.
Tsakiris, Manos
2008-01-01
How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel the touch myself? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they looked at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation, participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time it is a source of rich multisensory experiences used to maintain or update self-representations.
You Look Familiar: How Malaysian Chinese Recognize Faces
Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth
2012-01-01
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762
Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children
Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano
2015-01-01
This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651
The "Eye Avoidance" Hypothesis of Autism Face Processing.
Tanaka, James W; Sung, Andrew
2016-05-01
Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: (1) the holistic hypothesis, (2) the local perceptual bias hypothesis and (3) the eye avoidance hypothesis. A review of the literature indicates that, contrary to the holistic hypothesis, there is little evidence that holistic face processing is impaired in individuals with autism. The local perceptual bias account also fails to explain the selective advantage that ASD individuals demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits: individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits an increased physiological response, as indicated by heightened skin conductance and amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this approach interferes with the ability to process facial cues of identity, expressions and intentions, exacerbating the social challenges for persons with ASD.
The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information
ERIC Educational Resources Information Center
Liu, Chang Hong; Ward, James; Markall, Helena
2007-01-01
Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…
ERIC Educational Resources Information Center
Chawarska, Katarzyna; Volkmar, Fred
2007-01-01
Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…
Own-Group Face Recognition Bias: The Effects of Location and Reputation
Yan, Linlin; Wang, Zhe; Huang, Jianling; Sun, Yu-Hao P.; Judges, Rebecca A.; Xiao, Naiqi G.; Lee, Kang
2017-01-01
In the present study, we examined whether social categorization based on university affiliation can induce an advantage in recognizing faces. Moreover, we investigated how the reputation or location of the university affected face recognition performance using an old/new paradigm. We assigned five different university labels to the faces: participants’ own university and four other universities. Among the four other university labels, we manipulated the academic reputation and geographical location of these universities relative to the participants’ own university. The results showed that an own-group face recognition bias emerged for faces with own-university labels compared with those with other-university labels. Furthermore, we found a robust own-group face recognition bias only when the other university was located in a different city far away from participants’ own university. Interestingly, we failed to find any influence of university reputation on the own-group face recognition bias. These results suggest that categorizing a face as a member of one’s own university is sufficient to enhance recognition accuracy, and that location plays a more important role than reputation in the effect of social categorization on face recognition. The results provide insight into the role of motivational factors underlying university membership in face perception. PMID:29066989
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts discriminative local features from the sub-images separately. Moreover, the structural information of the different sub-images from the same face image is considered in the proposed method, with the aim of further improving recognition performance. Extensive experiments on three standard face databases (Extended Yale B, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms other sub-pattern based face recognition methods.
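The sub-pattern idea underlying SpMMDA, splitting a face image into sub-images and treating each one as a separate local pattern, can be sketched as follows. This is a minimal NumPy illustration of the partitioning step only; the per-sub-pattern discriminant projection (the MMDA step) is omitted, and the block size is an arbitrary choice for the toy example.

```python
import numpy as np

def partition_subpatterns(image, block_shape):
    """Split a 2-D face image into non-overlapping sub-images (sub-patterns),
    each flattened into a row vector for later per-block feature extraction."""
    h, w = image.shape
    bh, bw = block_shape
    blocks = []
    for i in range(0, h - h % bh, bh):
        for j in range(0, w - w % bw, bw):
            blocks.append(image[i:i + bh, j:j + bw].ravel())
    return np.array(blocks)

# Toy 4x4 "face" split into four 2x2 sub-patterns.
img = np.arange(16, dtype=float).reshape(4, 4)
subs = partition_subpatterns(img, (2, 2))
print(subs.shape)  # (4, 4): four blocks, each flattened to length 4
```

In a full pipeline, a discriminant projection would be learned for each sub-pattern position across the training set, and the resulting local features concatenated into the final representation.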
Facial patterns in a tropical social wasp correlate with colony membership
NASA Astrophysics Data System (ADS)
Baracchi, David; Turillazzi, Stefano; Chittka, Lars
2016-10-01
Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate group-level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition, by which wasps discriminate 'friends' from 'foes'. Using geometric morphometric analysis, a technique based on a rigorous statistical theory of shape that permits quantitative multivariate analyses of structure shapes, we first quantified variation in the facial markings of Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that, despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might potentially be perceived and used by wasps for colony-level recognition.
Rieffe, Carolien; Wiefferink, Carin H
2017-03-01
The capacity for emotion recognition and understanding is crucial for daily social functioning. We examined to what extent this capacity is impaired in young children with a Language Impairment (LI). In typical development, children learn to recognize emotions in faces and situations through social experiences and social learning. Children with LI have less access to these experiences and are therefore expected to fall behind their peers without LI. In this study, 89 preschool children with LI and 202 children without LI (mean age 3 years and 10 months in both groups) were tested on three indices for facial emotion recognition (discrimination, identification, and attribution in emotion evoking situations). Parents reported on their children's emotion vocabulary and ability to talk about their own emotions. Preschoolers with and without LI performed similarly on the non-verbal task for emotion discrimination. Children with LI fell behind their peers without LI on both other tasks for emotion recognition that involved labelling the four basic emotions (happy, sad, angry, fear). The outcomes of these two tasks were also related to children's level of emotion language. These outcomes emphasize the importance of 'emotion talk' at the youngest age possible for children with LI.
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Considerable effort has been devoted to devising image representations. However, handcrafted methods require strong domain knowledge and show low generalization ability, while conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition; it shares a similar topological architecture with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on Extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
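The pipeline this record describes, convolution filters learned by K-means over image patches, filtering, then a tanh nonlinearity, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the patch size, filter count, and iteration budget are arbitrary, the "convolution" is correlation-style as is standard in feature learning, and the second-order spatial pyramid pooling stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, k):
    """Collect all k x k patches of a 2-D image as flattened rows."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def kmeans_filters(patches, n_filters, n_iter=10):
    """Learn convolution filters as K-means centroids of image patches."""
    centers = patches[rng.choice(len(patches), n_filters, replace=False)]
    for _ in range(n_iter):
        # Assign each patch to its nearest centroid (squared Euclidean).
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(n_filters):
            if (labels == c).any():
                centers[c] = patches[labels == c].mean(0)
    return centers

def filter_response(img, filt_flat, k):
    """'Valid' correlation of img with a flattened k x k filter."""
    h, w = img.shape
    return (extract_patches(img, k) @ filt_flat).reshape(h - k + 1, w - k + 1)

# Toy 8x8 "face"; learn four 3x3 filters and compute tanh feature maps.
img = rng.standard_normal((8, 8))
patches = extract_patches(img, 3)
filters = kmeans_filters(patches, n_filters=4)
maps = np.tanh(np.stack([filter_response(img, f, 3) for f in filters]))
print(maps.shape)  # (4, 6, 6): one 6x6 map per learned filter
```

In the full model, the feature maps would then be pooled with second-order statistics over a multilevel spatial pyramid and concatenated into the final face representation.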
Smartphone based face recognition tool for the blind.
Kramer, K M; Hedin, D S; Rolkosky, D J
2010-01-01
The inability to identify people during group meetings is a disadvantage for blind people in many professional and educational situations. To explore the efficacy of face recognition using smartphones in these settings, we have prototyped and tested a face recognition tool for blind users. The tool utilizes smartphone technology in conjunction with a wireless network to provide audio feedback identifying the people in front of the blind user. Testing indicated that the face recognition technology achieved a 96% success rate with no false positives and could tolerate up to a 40-degree angle between the direction a person is looking and the camera's axis. Future work will further develop the technology to perform face recognition locally on the smartphone in addition to remote server-based face recognition.
Ding, Xiao Pan; Fu, Genyue; Lee, Kang
2013-01-01
The present study used the functional Near-infrared Spectroscopy (fNIRS) methodology to investigate the neural correlates of elementary school children’s own- and other-race face processing. An old-new paradigm was used to assess children’s recognition ability of own- and other-race faces. FNIRS data revealed that other-race faces elicited significantly greater [oxy-Hb] changes than own-race faces in the right middle frontal gyrus and inferior frontal gyrus regions (BA9) and the left cuneus (BA18). With increased age, the [oxy-Hb] activity differences between own- and other-race faces, or the neural other-race effect (NORE), underwent significant changes in these two cortical areas: at younger ages, the neural response to the other-race faces was modestly greater than that to the own-race faces, but with increased age, the neural response to the own-race faces became increasingly greater than that to the other-race faces. Moreover, these areas had strong regional functional connectivity with a swath of the cortical regions in terms of the neural other-race effect that also changed with increased age. We also found significant and positive correlations between the behavioral other-race effect (reaction time) and the neural other-race effect in the right middle frontal gyrus and inferior frontal gyrus regions (BA9). These results taken together suggest that children, like adults, devote different amounts of neural resources to processing own- and other-race faces, but the size and direction of the neural other-race effect and associated functional regional connectivity change with increased age. PMID:23891903
Ding, Xiao Pan; Fu, Genyue; Lee, Kang
2014-01-15
The present study used the functional Near-infrared Spectroscopy (fNIRS) methodology to investigate the neural correlates of elementary school children's own- and other-race face processing. An old-new paradigm was used to assess children's recognition ability of own- and other-race faces. FNIRS data revealed that other-race faces elicited significantly greater [oxy-Hb] changes than own-race faces in the right middle frontal gyrus and inferior frontal gyrus regions (BA9) and the left cuneus (BA18). With increased age, the [oxy-Hb] activity differences between own- and other-race faces, or the neural other-race effect (NORE), underwent significant changes in these two cortical areas: at younger ages, the neural response to the other-race faces was modestly greater than that to the own-race faces, but with increased age, the neural response to the own-race faces became increasingly greater than that to the other-race faces. Moreover, these areas had strong regional functional connectivity with a swath of the cortical regions in terms of the neural other-race effect that also changed with increased age. We also found significant and positive correlations between the behavioral other-race effect (reaction time) and the neural other-race effect in the right middle frontal gyrus and inferior frontal gyrus regions (BA9). These results taken together suggest that children, like adults, devote different amounts of neural resources to processing own- and other-race faces, but the size and direction of the neural other-race effect and associated functional regional connectivity change with increased age.
Working Memory Impairment in People with Williams Syndrome: Effects of Delay, Task and Stimuli
O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara
2009-01-01
Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern extends to working memory (WM) in WS. The present studies provide a new perspective, testing WM for a single stimulus using a delayed recognition paradigm in individuals with WS and typically developing children matched for mental age (MA matches). In three experiments, participants judged whether a second stimulus ‘matched’ an initial sample, either in location or identity. We first examined memory for faces, houses and locations using a 5 s delay (Experiment 1) and a 2 s delay (Experiment 2). We then tested memory for human faces, houses, cat faces, and shoes with a 2 s delay using a new set of stimuli that were better controlled for expression, hairline and orientation (Experiment 3). With the 5 s delay (Experiment 1), the WS group was impaired overall compared to MA matches. While participants with WS tended to perform more poorly than MA matches with the 2 s delay, they also exhibited an uneven profile compared to MA matches. Face recognition was relatively preserved in WS with friendly faces (Experiment 2) but not when the faces had a neutral expression and were less natural looking (Experiment 3). Experiment 3 indicated that memory for object identity was relatively stronger than memory for location in WS. These findings reveal an overall WM impairment in WS that can be overcome under some conditions. Abnormalities in the parietal lobe/dorsal stream in WS may damage not only the representation of spatial location but also may impact WM for visual stimuli more generally. PMID:19084315
Working memory impairment in people with Williams syndrome: effects of delay, task and stimuli.
O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara
2009-04-01
Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern in WS extends to working memory (WM). The present studies provide a new perspective, testing WM for a single stimulus using a delayed recognition paradigm in individuals with WS and typically developing children matched for mental age (MA matches). In three experiments, participants judged whether a second stimulus 'matched' an initial sample, either in location or identity. We first examined memory for faces, houses and locations using a 5s delay (Experiment 1) and a 2s delay (Experiment 2). We then tested memory for human faces, houses, cat faces, and shoes with a 2s delay using a new set of stimuli that were better controlled for expression, hairline and orientation (Experiment 3). With the 5s delay (Experiment 1), the WS group was impaired overall compared to MA matches. While participants with WS tended to perform more poorly than MA matches with the 2s delay, they also exhibited an uneven profile compared to MA matches. Face recognition was relatively preserved in WS with friendly faces (Experiment 2) but not when the faces had a neutral expression and were less natural looking (Experiment 3). Experiment 3 indicated that memory for object identity was relatively stronger than memory for location in WS. These findings reveal an overall WM impairment in WS that can be overcome under some conditions. Abnormalities in the parietal lobe/dorsal stream in WS may damage not only the representation of spatial location but may also impact WM for visual stimuli more generally.
Hedley, Darren; Brewer, Neil; Young, Robyn
2011-12-01
Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42 IQ-matched non-ASD individuals, and with age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within +/- 1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance.
Brown, Laura S
2017-03-01
Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform the development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces.
Implications of holistic face processing in autism and schizophrenia
Watson, Tamara L.
2013-01-01
People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581
Decoding facial expressions based on face-selective and motion-sensitive areas.
Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin
2017-06-01
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information play important roles by carrying considerable expression information that can facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017.
Movement cues aid face recognition in developmental prosopagnosia.
Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah
2015-11-01
Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion compared with static images. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across two blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired.
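The discriminability measure (A) reported in this record belongs to the family of nonparametric signal-detection indices computed from hit and false-alarm rates. As an illustration only, the sketch below implements the classic A' statistic of Pollack and Norman; the paper's exact A measure may be a different variant of this index.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discriminability A' from hit and false-alarm rates.

    0.5 means chance performance; 1.0 means perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Symmetric form for the (unusual) case where false alarms exceed hits.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# e.g. 80% hits to famous faces, 20% false alarms to fillers:
print(round(a_prime(0.8, 0.2), 3))  # 0.875
```

Computing such an index separately for the moving and static blocks is what allows a "movement advantage" to be expressed as a difference in discriminability rather than raw accuracy.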
A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.
Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin
2015-12-01
Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can serve as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that COX Face DB is a good benchmark database for evaluation.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two emotion recognition tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the 6 basic Ekman emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was with emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.
The Hierarchical Brain Network for Face Recognition
Zhen, Zonglei; Fang, Huizhen; Liu, Jia
2013-01-01
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level. PMID:23527282
Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian
2018-02-01
Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An Investigation of Emotion Recognition and Theory of Mind in People with Chronic Heart Failure
Habota, Tina; McLennan, Skye N.; Cameron, Jan; Ski, Chantal F.; Thompson, David R.; Rendell, Peter G.
2015-01-01
Objectives: Cognitive deficits are common in patients with chronic heart failure (CHF), but no study has investigated whether these deficits extend to social cognition. The present study provided the first empirical assessment of emotion recognition and theory of mind (ToM) in patients with CHF. In addition, it assessed whether each of these social cognitive constructs was associated with more general cognitive impairment. Methods: A group comparison design was used, with 31 CHF patients compared to 38 demographically matched controls. The Ekman Faces test was used to assess emotion recognition, and the Mind in the Eyes test to measure ToM. Measures assessing global cognition, executive functions, and verbal memory were also administered. Results: There were no differences between groups on emotion recognition or ToM. The CHF group's performance was poorer on some executive measures, but memory was relatively preserved. In the CHF group, both emotion recognition performance and ToM ability correlated moderately with global cognition (r = .38, p = .034; r = .49, p = .005, respectively), but not with executive function or verbal memory. Conclusion: CHF patients with lower cognitive ability were more likely to have difficulty recognizing emotions and inferring the mental states of others. Clinical implications of these findings are discussed. PMID:26529409
Colzato, Lorenza S; Sellaro, Roberta; Beste, Christian
2017-07-01
Charles Darwin proposed that emotional facial expressions are evolved and adaptive, and that, via the vagus nerve (the tenth cranial nerve), they serve a crucial communicative function. In line with this idea, the later-developed polyvagal theory assumes that the vagus nerve is the key phylogenetic substrate that regulates emotional and social behavior. The polyvagal theory assumes that optimal social interaction, which includes the recognition of emotion in faces, is modulated by the vagus nerve. So far, in humans, it has not yet been demonstrated that the vagus plays a causal role in emotion recognition. To investigate this we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that modulates brain activity via bottom-up mechanisms. A sham/placebo-controlled, randomized cross-over within-subjects design was used to infer a causal relation between the stimulated vagus nerve and the related ability to recognize emotions as indexed by the Reading the Mind in the Eyes Test in 38 healthy young volunteers. Active tVNS, compared to sham stimulation, enhanced emotion recognition for easy items, suggesting that it promoted the ability to decode salient social cues. Our results confirm that the vagus nerve is causally involved in emotion recognition, supporting Darwin's argumentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Neural microgenesis of personally familiar face recognition
Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno
2015-01-01
Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361
Fenske, Sabrina; Lis, Stefanie; Liebke, Lisa; Niedtfeld, Inga; Kirsch, Peter; Mier, Daniela
2015-01-01
Borderline Personality Disorder (BPD) is characterized by severe deficits in social interactions, which might be linked to deficits in emotion recognition. Research on emotion recognition abilities in BPD revealed heterogeneous results, ranging from deficits to heightened sensitivity. The most stable findings point to an impairment in the evaluation of neutral facial expressions as neutral, as well as to a negative bias in emotion recognition; that is, the tendency to attribute negative emotions to neutral expressions, or in a broader sense to report a more negative emotion category than depicted. However, it remains unclear which contextual factors influence the occurrence of this negative bias. Previous studies suggest that priming by preceding emotional information and also constrained processing time might augment the emotion recognition deficit in BPD. To test these assumptions, 32 female BPD patients and 31 healthy females, matched for age and education, participated in an emotion recognition study, in which every facial expression was preceded by either a positive, neutral or negative scene. Furthermore, time constraints for processing were varied by presenting the facial expressions with short (100 ms) or long duration (up to 3000 ms) in two separate blocks. BPD patients showed a significant deficit in emotion recognition for neutral and positive facial expressions, associated with a significant negative bias. In BPD patients, this emotion recognition deficit was differentially affected by preceding emotional information and time constraints, with a greater influence of emotional information during long face presentations and a greater influence of neutral information during short face presentations. Our results are in line with previous findings supporting the existence of a negative bias in emotion recognition in BPD patients, and provide further insights into biased social perceptions in BPD patients.
Dimitriou, D; Leonard, H C; Karmiloff-Smith, A; Johnson, M H; Thomas, M S C
2015-05-01
Configural processing in face recognition is a sensitivity to the spacing between facial features. It has been argued both that its presence represents a high level of expertise in face recognition, and also that it is a developmentally vulnerable process. We report a cross-syndrome investigation of the development of configural face recognition in school-aged children with autism, Down syndrome and Williams syndrome compared with a typically developing comparison group. Cross-sectional trajectory analyses were used to compare configural and featural face recognition utilising the 'Jane faces' task. Trajectories were constructed linking featural and configural performance either to chronological age or to different measures of mental age (receptive vocabulary, visuospatial construction), as well as the Benton face recognition task. An emergent inversion effect across age for detecting configural but not featural changes in faces was established as the marker of typical development. Children from clinical groups displayed atypical profiles that differed across all groups. We discuss the implications for the nature of face processing within the respective developmental disorders, and how the cross-sectional syndrome comparison informs the constraints that shape the typical development of face recognition. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Motion facilitates face perception across changes in viewpoint and expression in older adults.
Maguinness, Corrina; Newell, Fiona N
2014-12-01
Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Facial Expression Influences Face Identity Recognition During the Attentional Blink
Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J
2014-01-01
Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry—suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076
Pose-Invariant Face Recognition via RGB-D Images.
Sang, Gaoli; Li, Jing; Zhao, Qijun
2016-01-01
Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
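The final step described above, where texture and depth both "contribute to the final identity estimation", is typically realized as score-level fusion. A minimal sketch of that idea (the weights, score ranges, and gallery names are illustrative assumptions, not values from the paper):

```python
def fuse_scores(texture_score, depth_score, w_texture=0.6, w_depth=0.4):
    """Weighted score-level fusion of texture and depth similarities.

    Both scores are assumed normalized to [0, 1]; the weights are
    illustrative, not the paper's.
    """
    return w_texture * texture_score + w_depth * depth_score

def rank_gallery(scores):
    """scores: {identity: (texture_score, depth_score)} -> best identity."""
    return max(scores, key=lambda k: fuse_scores(*scores[k]))

# Toy candidates: strong texture match vs. strong depth match.
candidates = {"alice": (0.9, 0.4), "bob": (0.5, 0.95)}
print(rank_gallery(candidates))
```

With these weights the texture-dominant match wins; tuning the weights trades off the two cues.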
Using Speech Recognition Software to Improve Writing Skills
ERIC Educational Resources Information Center
Diaz, Felix
2014-01-01
Orthopedically impaired (OI) students face a formidable challenge during the writing process due to their limited or non-existing ability to use their hands to hold a pen or pencil or even to press the keys on a keyboard. While they may have a clear mental picture of what they want to write, the biggest hurdle comes well before having to tackle…
It's All in Your Head: Why Is the Body Inversion Effect Abolished for Headless Bodies?
ERIC Educational Resources Information Center
Yovel, Galit; Pelc, Tatiana; Lubetzky, Ida
2010-01-01
It has been recently argued that human bodies are processed by a specialized processing mechanism. Central evidence was that body inversion reduces recognition abilities (body inversion effect; BIE) as much as it does for faces, but more than for other objects. Here we showed that the BIE is markedly reduced for headless bodies and examined the…
Van Strien, Jan W; Glimmerveen, Johanna C; Franken, Ingmar H A; Martens, Vanessa E G; de Bruin, Eveline A
2011-09-01
To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with distracter faces. The children were required to make old vs. new decisions. Older children responded faster than younger children, but younger children exhibited a steeper decrease in latencies across the five repetitions. Older children exhibited better accuracy for new faces, but there were no age differences in recognition accuracy for repeated faces. For the N2, N400 and late positive complex (LPC), we analyzed the old/new effects (repetition 1 vs. new presentation) and the extended repetition effects (repetitions 1 through 5). Compared to older children, younger children exhibited larger frontocentral N2 and N400 old/new effects. For extended face repetitions, negativity of the N2 and N400 decreased in a linear fashion in both age groups. For the LPC, an ERP component thought to reflect recollection, no significant old/new or extended repetition effects were found. Employing the same face recognition paradigm in 20 adults (Study 2), we found a significant N400 old/new effect at lateral frontal sites and a significant LPC repetition effect at parietal sites, with LPC amplitudes increasing linearly with the number of repetitions. This study clearly demonstrates differential developmental courses for the N400 and LPC pertaining to recognition memory for faces. It is concluded that face recognition in children is mediated by early and probably more automatic than conscious recognition processes. In adults, the LPC extended repetition effect indicates that adult face recognition memory is related to a conscious and graded recollection process rather than to an automatic recognition process. © 2011 Blackwell Publishing Ltd.
Successful decoding of famous faces in the fusiform face area.
Axelrod, Vadim; Yovel, Galit
2015-01-01
What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional magnetic resonance imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.
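Multivoxel identity decoding of the kind described above is usually a cross-validated pattern classifier over voxel responses. A toy nearest-centroid sketch (the voxel patterns, identities, and classifier choice are hypothetical; the study's actual decoding pipeline is not specified in the abstract):

```python
def centroid(patterns):
    """Mean voxel pattern across trials (patterns: list of equal-length lists)."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    """Assign a test pattern to the identity with the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: sq_dist(pattern, centroids[name]))

# Toy training patterns (trials x voxels) for two face identities.
train = {
    "face_1": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "face_2": [[0.1, 1.0, 0.9], [0.0, 0.8, 1.0]],
}
centroids = {name: centroid(trials) for name, trials in train.items()}
print(classify([0.95, 0.15, 0.05], centroids))
```

Above-chance accuracy on held-out trials is then the evidence that a region carries identity information.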
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to "fast" visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
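Minimum reaction time in speeded categorization studies like this one is commonly estimated as the earliest time bin in which correct responses reliably outnumber errors. A simplified sketch of that idea (the bin width, excess criterion, and data are assumptions; published analyses typically use a statistical test per bin):

```python
def minimum_rt(correct_rts, error_rts, bin_ms=20, min_excess=3):
    """Earliest bin start (ms) where correct responses outnumber errors
    by at least `min_excess`, scanning fixed-width bins from 0 ms.
    Simplified count criterion; returns None if no bin qualifies."""
    if not correct_rts:
        return None
    last = max(correct_rts + error_rts)
    start = 0
    while start <= last:
        end = start + bin_ms
        n_correct = sum(start <= rt < end for rt in correct_rts)
        n_error = sum(start <= rt < end for rt in error_rts)
        if n_correct - n_error >= min_excess:
            return start
        start = end
    return None

# Hypothetical reaction times (ms) from a familiarity categorization block.
correct = [460, 465, 470, 472, 480, 485, 500]
errors = [455, 610]
print(minimum_rt(correct, errors))
```

Here the 460-480 ms bin is the first with enough correct responses in excess of errors.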
Head pose estimation in computer vision: a survey.
Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai
2009-04-01
The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.
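A crude identity-invariant yaw estimate can be read off facial landmark geometry, for example from the horizontal offset of the nose tip relative to the eye midline. A toy sketch of that heuristic (landmark coordinates and the offset-to-angle mapping are assumptions; the systems surveyed use far more robust models):

```python
import math

def coarse_yaw_deg(left_eye, right_eye, nose_tip):
    """Rough yaw angle from the nose tip's horizontal offset relative to
    the eye midpoint, normalized by inter-ocular distance. A purely
    geometric toy heuristic, not a calibrated pose estimator."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    inter_ocular = abs(right_eye[0] - left_eye[0])
    offset = (nose_tip[0] - mid_x) / inter_ocular
    # Map the normalized offset to an angle; the atan mapping is arbitrary.
    return math.degrees(math.atan(offset))

# Frontal face: nose tip centred between the eyes gives ~0 degrees.
print(round(coarse_yaw_deg((100, 120), (160, 120), (130, 150)), 1))
```

A nose tip shifted toward one eye yields a correspondingly signed yaw, giving only the coarse pose categories the survey distinguishes from fine estimation.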
The role of skin colour in face recognition.
Bar-Haim, Yair; Saidel, Talia; Yovel, Galit
2009-01-01
People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the cross-section between skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.
Hills, Peter J; Eaton, Elizabeth; Pake, J Michael
2016-01-01
Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.
The Oxytocin Receptor Gene (OXTR) and Face Recognition.
Verhallen, Roeland J; Bosten, Jenny M; Goodbourn, Patrick T; Lawrance-Owen, Adam J; Bargary, Gary; Mollon, J D
2017-01-01
A recent study has linked individual differences in face recognition to rs237887, a single-nucleotide polymorphism (SNP) of the oxytocin receptor gene (OXTR; Skuse et al., 2014). In that study, participants were assessed using the Warrington Recognition Memory Test for Faces, but performance on Warrington's test has been shown not to rely purely on face recognition processes. We administered the widely used Cambridge Face Memory Test, a purer test of face recognition, to 370 participants. Performance was not significantly associated with rs237887, with 16 other SNPs of OXTR that we genotyped, or with a further 75 imputed SNPs. We also administered three other tests of face processing (the Mooney Face Test, the Glasgow Face Matching Test, and the Composite Face Test), but performance was never significantly associated with rs237887 or with any of the other genotyped or imputed SNPs, after corrections for multiple testing. In addition, we found no associations between OXTR and Autism-Spectrum Quotient scores.
Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia.
Stollhoff, Rainer; Jost, Jürgen; Elze, Tobias; Kennerknecht, Ingo
2011-01-25
The study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. We firstly report on a controlled test of long-term (one year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory is significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and didn't extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Secondly, we combined test results on long-term and short-term recognition of faces and objects, and found a large heterogeneity in severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with a homogeneous phenotypical profile. Thirdly, we found that among CPs self-assessment of real-life difficulties, based on a standardized questionnaire, and experimentally assessed face recognition deficits are strongly correlated. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with a homogeneous phenotypical profile. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception.
FaceIt: face recognition from static and live video for law enforcement
NASA Astrophysics Data System (ADS)
Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.
1997-01-01
Recent advances in image and pattern recognition technology, especially face recognition, are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence, such as face and fingerprint records, and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient, cost-effective investigative tool.
Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.
2014-01-01
Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283
Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition
NASA Astrophysics Data System (ADS)
Buciu, Ioan; Pitas, Ioannis
Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques which perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.
A Survey on Sentiment Classification in Face Recognition
NASA Astrophysics Data System (ADS)
Qian, Jingyu
2018-01-01
Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, autoencoder, and convolutional neural network, each representing a design idea for face recognition method, are three popular algorithms to deal with face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper will focus on one specific face recognition problem-sentiment classification from images. Three different algorithms for sentiment classification problems will be summarized, including k-means clustering, autoencoder, and convolutional neural network. An experiment with the application of these algorithms on a specific dataset of human faces will be conducted to illustrate how these algorithms are applied and their accuracy. Finally, the three algorithms are compared based on the accuracy result.
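Of the three algorithms the survey compares, k-means clustering is the simplest to state precisely: alternate between assigning each sample to its nearest centroid and recomputing each centroid as the mean of its assigned samples. The sketch below is a minimal, generic pure-Python implementation for illustration; the function name, the toy 2-D points, and the fixed seed are illustrative choices, not from the paper.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)

    def nearest(p):
        # index of the centroid with the smallest squared distance to p
        return min(range(k),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[i])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, [nearest(p) for p in points]
```

On well-separated data the assignments stabilize after a few iterations; for real face images the "points" would be flattened pixel vectors or learned features rather than 2-D coordinates.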
Extracted facial feature of racial closely related faces
NASA Astrophysics Data System (ADS)
Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu
2010-02-01
Human faces contain a great deal of demographic information, such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research concerning image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is fundamental research on race perception, which is essential for the establishment of human-like race recognition systems.
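The PCA step described in the abstract reduces to finding the dominant eigenvector of the data's covariance matrix. A minimal, generic sketch via power iteration follows; this is not the authors' code, and the function name and toy data are illustrative. Real face PCA would run on flattened pixel vectors with many components, not the first component of 2-D points.

```python
def principal_component(data, iters=100):
    """First principal component of row-vector data, found by power
    iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # multiply by cov, then renormalize; converges to the
        # eigenvector with the largest eigenvalue
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting faces onto the leading components gives the compact "race feature" coordinates the paper manipulates when synthesizing faces.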
Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H
2014-07-29
Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis-a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments are presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
Effects of exposure to facial expression variation in face learning and recognition.
Liu, Chang Hong; Chen, Wenfeng; Ward, James
2015-11-01
Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.
Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P
2015-08-01
Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P
2015-01-01
Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752
Revisiting the earliest electrophysiological correlate of familiar face recognition.
Huang, Wanyi; Wu, Xia; Hu, Liping; Wang, Lei; Ding, Yulong; Qu, Zhe
2017-10-01
The present study used event-related potentials (ERPs) to reinvestigate the earliest face familiarity effect (FFE: ERP differences between familiar and unfamiliar faces) that genuinely reflects cognitive processes underlying recognition of familiar faces in long-term memory. To trigger relatively early FFEs, participants were required to categorize upright and inverted famous faces and unknown faces in a task that placed high demand on face recognition. More importantly, to determine whether an observed FFE was linked to on-line face recognition, we systematically investigated the relationship between the FFE and behavioral performance of face recognition. The results showed significant FFEs on the P1, N170, N250, and P300 waves. The FFEs on occipital P1 and N170 (<200 ms) showed reversed polarities for upright and inverted faces, and were not correlated with any behavioral measure (accuracy, response time) or modulated by learning, indicating that they might merely reflect low-level visual differences between face sets. In contrast, the later FFEs on occipito-temporal N250 (~230 ms) and centro-parietal P300 (~350 ms) showed consistent polarities for upright and inverted faces. The N250 FFE was individually correlated with recognition speed for upright faces, and could be obtained for inverted faces through learning. The P300 FFE was also related to behavior in many respects. These findings provide novel evidence that cognitive discrimination of familiar and unfamiliar faces starts no earlier than 200 ms after stimulus onset, and that the familiarity effect on N250 may be the first electrophysiological correlate underlying recognition of familiar faces in long-term memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V
2014-07-01
Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.
Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M
2013-01-01
This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings rely extensively on emotions to convey messages and resolve their meaning. Emotion detection and face recognition can provide an interface between individuals and technology. The most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to handle emotion detection across varying poses. In this paper, we present an efficient method that recognizes facial expressions by tracking face points and distances. It can automatically identify an observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
Discriminating Power of Localized Three-Dimensional Facial Morphology
Hammond, Peter; Hutton, Tim J.; Allanson, Judith E.; Buxton, Bernard; Campbell, Linda E.; Clayton-Smith, Jill; Donnai, Dian; Karmiloff-Smith, Annette; Metcalfe, Kay; Murphy, Kieran C.; Patton, Michael; Pober, Barbara; Prescott, Katrina; Scambler, Pete; Shaw, Adam; Smith, Ann C. M.; Stevens, Angela F.; Temple, I. Karen; Hennekam, Raoul; Tassabehji, May
2005-01-01
Many genetic syndromes involve a facial gestalt that suggests a preliminary diagnosis to an experienced clinical geneticist even before a clinical examination and genotyping are undertaken. Previously, using visualization and pattern recognition, we showed that dense surface models (DSMs) of full face shape characterize facial dysmorphology in Noonan and in 22q11 deletion syndromes. In this much larger study of 696 individuals, we extend the use of DSMs of the full face to establish accurate discrimination between controls and individuals with Williams, Smith-Magenis, 22q11 deletion, or Noonan syndromes and between individuals with different syndromes in these groups. However, the full power of the DSM approach is demonstrated by the comparable discriminating abilities of localized facial features, such as periorbital, perinasal, and perioral patches, and the correlation of DSM-based predictions and molecular findings. This study demonstrates the potential of face shape models to assist clinical training through visualization, to support clinical diagnosis of affected individuals through pattern recognition, and to enable the objective comparison of individuals sharing other phenotypic or genotypic properties. PMID:16380911
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme with two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. With associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets: the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
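The wavelet subband representation the abstract relies on can be illustrated with a single level of the 2-D Haar transform: filter rows into low/high-pass halves, then filter columns, yielding the LL, LH, HL and HH subbands. Haar is an illustrative choice here (the abstract does not name the wavelet family), and the function names and toy image are hypothetical.

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: pairwise averages form the
    low-pass (approximation) subband, pairwise half-differences the
    high-pass (detail) subband. Length must be even."""
    low = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    high = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return low, high

def haar_2d(image):
    """One level of the 2-D Haar transform: filter rows, then columns.
    Returns the LL, LH, HL, HH subbands; LL is the low-resolution image
    typically kept as a compact face feature."""
    def cols(mat):  # transpose helper
        return [list(c) for c in zip(*mat)]
    lows, highs = [], []
    for row in image:
        lo, hi = haar_1d(row)
        lows.append(lo)
        highs.append(hi)
    ll_hl = [haar_1d(col) for col in cols(lows)]
    lh_hh = [haar_1d(col) for col in cols(highs)]
    ll = cols([p[0] for p in ll_hl])
    hl = cols([p[1] for p in ll_hl])
    lh = cols([p[0] for p in lh_hh])
    hh = cols([p[1] for p in lh_hh])
    return ll, lh, hl, hh
```

Repeating the decomposition on LL gives the multi-level subband pyramid; feeding LL coefficients (instead of raw pixels) to a per-person associative memory is the design the paper argues keeps cost low while preserving facial structure.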
Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard
2008-04-01
Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. Possible mechanisms for the observed effects, and implications for rehabilitation, are discussed.
De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan
2016-01-01
Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for whom, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.
Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.
Brown, Charity; Lloyd-Jones, Toby J
2006-03-01
We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition has attained great importance in today's technological world, as have face recognition applications. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts prominent facial features. Many of these features are redundant and do not contribute to representing the face. To eliminate the redundant features, a computationally efficient Kruskal-Wallis-based algorithm is used to select the more discriminative ones. The selected features are then passed to the classification step, where different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier alone is unable to achieve high accuracy. Experiments are performed on standard face database images, and results are compared with existing techniques.
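The Kruskal-Wallis test named in the title scores a feature by ranking all its values jointly across classes and asking how far each class's mean rank departs from chance; features with large H separate the classes well. The sketch below implements the standard H statistic (with average ranks for ties) plus a small feature-ranking wrapper; the wrapper and all names are illustrative, not the authors' code.

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic: rank all observations jointly
    (ties receive their average rank), then measure how far each
    group's rank sum departs from its chance expectation."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

def rank_features(samples_by_class):
    """Order feature indices by descending H score: the most
    class-discriminative features come first."""
    n_features = len(samples_by_class[0][0])
    return sorted(range(n_features),
                  key=lambda f: kruskal_h([[row[f] for row in cls]
                                           for cls in samples_by_class]),
                  reverse=True)
```

Because the statistic uses only ranks, it needs no distributional assumptions and a single pass of sorting per feature, which is what makes the selection step computationally cheap.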
Corcoran, C M; Keilp, J G; Kayser, J; Klim, C; Butler, P D; Bruder, G E; Gur, R C; Javitt, D C
2015-10-01
Schizophrenia is characterized by profound and disabling deficits in the ability to recognize emotion in facial expression and tone of voice. Although these deficits are well documented in established schizophrenia using recently validated tasks, their predictive utility in at-risk populations has not been formally evaluated. The Penn Emotion Recognition and Discrimination tasks, and recently developed measures of auditory emotion recognition, were administered to 49 clinical high-risk subjects prospectively followed for 2 years for schizophrenia outcome, to 31 healthy controls, and to a developmental cohort of 43 individuals aged 7-26 years. The deficit in emotion recognition in at-risk subjects was compared with the deficit in established schizophrenia, and with normal neurocognitive growth curves from childhood to early adulthood. Deficits in emotion recognition significantly distinguished at-risk patients who transitioned to schizophrenia. By contrast, more general neurocognitive measures, such as attention vigilance or processing speed, were non-predictive. The best classification model for schizophrenia onset included both face emotion processing and negative symptoms, with an accuracy of 96% and an area under the receiver-operating characteristic curve of 0.99. In a parallel developmental study, emotion recognition abilities were found to reach maturity prior to the traditional age of risk for schizophrenia, suggesting they may serve as objective markers of early developmental insult. Profound deficits in emotion recognition exist in at-risk patients prior to schizophrenia onset. They may serve as an index of early developmental insult and represent an effective target for early identification and remediation. Future studies investigating emotion recognition deficits at both mechanistic and predictive levels are strongly encouraged.
Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition
ERIC Educational Resources Information Center
Freitag, Claudia; Schwarzer, Gudrun
2011-01-01
Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…
Children's Face Identity Representations Are No More View Specific than Those of Adults
ERIC Educational Resources Information Center
Jeffery, Linda; Rathbone, Cameron; Read, Ainsley; Rhodes, Gillian
2013-01-01
Face recognition performance improves during childhood, not reaching adult levels until late adolescence, yet the source of this improvement is unclear. Recognition of faces across changes in viewpoint appears particularly slow to develop. Poor cross-view recognition suggests that children's face representations may be more view specific than…
Robust kernel representation with statistical local features for face recognition.
Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David
2013-06-01
Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
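As a rough illustration of the statistical-local-feature idea in the abstract above, the sketch below computes basic 8-neighbour LBP codes, per-block histograms, and a max pooling over small spatial shifts. It is a hedged toy version only: the grid size, the shift set, the wrap-around `np.roll`, and the function names are our illustrative assumptions, not the parameters of the paper's SLF model.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for a 2-D grayscale array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def block_histograms(codes, grid):
    """Split the code map into grid x grid blocks and histogram each block."""
    h, w = codes.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = codes[by * h // grid:(by + 1) * h // grid,
                          bx * w // grid:(bx + 1) * w // grid]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.array(feats)  # (grid * grid, 256)

def multipartition_max_pooling(img, grid=4, shifts=(0, 2)):
    """Element-wise max of block histograms computed over slightly shifted
    partitions of the image, adding tolerance to small registration errors
    in the spirit of the SLF pipeline (shift set chosen for illustration)."""
    feats = []
    for dy in shifts:
        for dx in shifts:
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            feats.append(block_histograms(lbp_codes(shifted), grid))
    return np.max(np.stack(feats), axis=0).ravel()
```

Because the pooled feature is a max over histograms taken at displaced partitions, a face image translated by a pixel or two yields a nearly unchanged descriptor, which is the invariance property the abstract attributes to multipartition max pooling.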
Minimizing Skin Color Differences Does Not Eliminate the Own-Race Recognition Advantage in Infants
Anzures, Gizelle; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Lee, Kang
2011-01-01
An abundance of experience with own-race faces and limited to no experience with other-race faces has been associated with better recognition memory for own-race faces in infants, children, and adults. This study investigated the developmental origins of this other-race effect (ORE) by examining the role of a salient perceptual property of faces—that of skin color. Six- and 9-month-olds’ recognition memory for own- and other-race faces was examined using infant-controlled habituation and visual-paired comparison at test. Infants were shown own- or other-race faces in color or with skin color cues minimized in grayscale images. Results for the color stimuli replicated previous findings that infants show an ORE in face recognition memory. Results for the grayscale stimuli showed that even when a salient perceptual cue to race, such as skin color information, is minimized, 6- to 9-month-olds, nonetheless, show an ORE in their face recognition memory. Infants’ use of shape-based and configural cues for face recognition is discussed. PMID:22039335
Successful Decoding of Famous Faces in the Fusiform Face Area
Axelrod, Vadim; Yovel, Galit
2015-01-01
What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition. PMID:25714434
Recognition Memory for Realistic Synthetic Faces
Yotsumoto, Yuko; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert
2006-01-01
A series of experiments examined short-term recognition memory for trios of briefly-presented, synthetic human faces derived from three real human faces. The stimuli were graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces' perceptual similarity stucture strongly influenced recognition memory. Results were fit by NEMo, a noisy exemplar model of perceptual recognition memory. The fits revealed that recognition memory was influenced both by the similarity of the probe to series items, and by the similarities among the series items themselves. Non-metric multi-dimensional scaling (MDS) showed that faces' perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMo gave a better account of the results when similarity was defined as perceptual, MDS similarity rather than physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli. PMID:17948069
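The summed-similarity rule at the heart of noisy exemplar models such as NEMo can be sketched in a few lines. This is a hedged toy version: the exponential similarity kernel, the noise level, and the homogeneity weight `beta` below are illustrative assumptions, not the fitted parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def summed_similarity(probe, items, tau=1.0, beta=0.5, noise=0.1):
    """Decision variable of a noisy summed-similarity exemplar model.
    Each stored face is perturbed by Gaussian memory noise; similarity
    decays exponentially with distance in face space. The beta term adds
    the mean similarity among the list items themselves, capturing the
    within-list homogeneity effect the abstract describes."""
    noisy = items + rng.normal(0, noise, items.shape)
    probe_sim = np.sum(np.exp(-tau * np.linalg.norm(noisy - probe, axis=1)))
    pairs = [np.exp(-tau * np.linalg.norm(a - b))
             for i, a in enumerate(noisy) for b in noisy[i + 1:]]
    return probe_sim + beta * np.mean(pairs)
```

A probe identical to a studied face yields a larger decision variable than a distant lure, so thresholding this value against a criterion models old/new judgments.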
Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.
Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas
2015-08-01
Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
Recognition Memory Span in Autopsy-Confirmed Dementia with Lewy Bodies and Alzheimer’s Disease
Salmon, David P.; Heindel, William C.; Hamilton, Joanne M.; Filoteo, J. Vincent; Cidambi, Varun; Hansen, Lawrence A.; Masliah, Eliezer; Galasko, Douglas
2016-01-01
Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and normal control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from Long-Term Storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. PMID:26184443
Comparison of emotion recognition from facial expression and music.
Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija
2011-01-01
The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for by the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive abilities such as attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, evaluated differently as relevant emotional clues.
Face recognition based on matching of local features on 3D dynamic range sequences
NASA Astrophysics Data System (ADS)
Echeagaray-Patrón, B. A.; Kober, Vitaly
2016-09-01
3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were exposed either to rich perceptual information (viewing each face from different angles and under different illuminations) or to conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Apps, Matthew A. J.; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos
2013-01-01
Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one’s own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one’s face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one’s self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one’s own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. PMID:22940117
Apps, Matthew A J; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos
2012-11-15
Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one's own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one's face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one's self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one's own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
ERIC Educational Resources Information Center
Wiese, Holger; Komes, Jessica; Tüttenberg, Simone; Leidinger, Jana; Schweinberger, Stefan R.
2017-01-01
Difficulties in person recognition are among the common complaints associated with cognitive ageing. The present series of experiments therefore investigated face and person recognition in young and older adults. The authors examined how within-domain and cross-domain repetition as well as semantic priming affect familiar face recognition and…
Toward End-to-End Face Recognition Through Alignment Learning
NASA Astrophysics Data System (ADS)
Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo
2017-08-01
Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice of them is to specifically align the facial area based on the prior knowledge of human face structure before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the designing and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge on facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08% which is comparable to state-of-the-art single model based methods.
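The spatial transformer layer referred to above warps the input with a predicted affine transform before feature extraction. Below is a minimal numpy sketch of the two fixed pieces of such a layer, grid generation and bilinear sampling; the learnable localization network that outputs `theta` is omitted, and the function names are ours, not the paper's.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Sampling grid for a 2x3 affine matrix over normalized coords in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    return (theta @ coords).T.reshape(H, W, 2)  # (H, W, [x, y])

def bilinear_sample(img, grid):
    """Bilinear sampling of a 2-D image at the grid locations (the operation
    that makes a spatial transformer differentiable w.r.t. theta)."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x0, y0 = np.clip(x0, 0, W - 2), np.clip(y0, 0, H - 2)
    wx, wy = x - x0, y - y0
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x0 + 1] * wx * (1 - wy)
            + img[y0 + 1, x0] * (1 - wx) * wy + img[y0 + 1, x0 + 1] * wx * wy)
```

With the identity `theta = [[1, 0, 0], [0, 1, 0]]` the sampler reproduces the input; during training, gradients from the recognition loss adjust `theta` so the network learns its own alignment.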
Shafai, Fakhri; Oruc, Ipek
2018-02-01
The other-race effect is the finding of diminished performance in recognition of other-race faces compared to those of own-race. It has been suggested that the other-race effect stems from specialized expert processes being tuned exclusively to own-race faces. In the present study, we measured recognition contrast thresholds for own- and other-race faces as well as houses for Caucasian observers. We have factored face recognition performance into two invariant aspects of visual function: efficiency, which is related to neural computations and processing demanded by the task, and equivalent input noise, related to signal degradation within the visual system. We hypothesized that if expert processes are available only to own-race faces, this should translate into substantially greater recognition efficiencies for own-race compared to other-race faces. Instead, we found similar recognition efficiencies for both own- and other-race faces. The other-race effect manifested as increased equivalent input noise. These results argue against qualitatively distinct perceptual processes. Instead they suggest that for Caucasian observers, similar neural computations underlie recognition of own- and other-race faces. Copyright © 2018 Elsevier Ltd. All rights reserved.
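The efficiency / equivalent-input-noise decomposition used in this abstract is typically obtained from the linear-amplifier model, in which threshold signal energy grows linearly with external noise density. The fit can be sketched in a few lines; the function name and the slope-ratio definition of efficiency below are our simplifying assumptions, not the study's exact procedure.

```python
import numpy as np

def equivalent_noise_and_efficiency(n_ext, e_obs, e_ideal):
    """Fit the linear-amplifier model E = k * (N_ext + N_eq).
    n_ext: external noise spectral densities; e_obs / e_ideal: threshold
    signal energies for the human observer and the ideal observer.
    Returns the observer's equivalent input noise N_eq (x-intercept of
    the fitted line) and efficiency, here taken as the ideal/observer
    ratio of fitted slopes (the high-noise limit)."""
    k_obs, b_obs = np.polyfit(n_ext, e_obs, 1)
    k_ideal, _ = np.polyfit(n_ext, e_ideal, 1)
    n_eq = b_obs / k_obs
    efficiency = k_ideal / k_obs
    return n_eq, efficiency
```

Under this decomposition, the finding above corresponds to own- and other-race faces yielding similar slopes (efficiency) but other-race faces yielding a larger intercept (equivalent input noise).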
Evidence for view-invariant face recognition units in unfamiliar face learning.
Etchells, David B; Brooks, Joseph L; Johnston, Robert A
2017-05-01
Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, the face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset was approaching. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Quest Hierarchy for Hyperspectral Face Recognition
2011-03-01
numerous face recognition algorithms available, several very good literature surveys are available that include Abate [29], Samal [110], Kong [18], Zou… Perception, Japan (January 1994). [110] Samal, Ashok and P. Iyengar, "Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey"
Developmental Commonalities between Object and Face Recognition in Adolescence
Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules
2016-01-01
In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176
The "parts and wholes" of face recognition: A review of the literature.
Tanaka, James W; Simonyi, Diana
2016-10-01
It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested both in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2017-12-01
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
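At classification time, the HMM machinery behind this approach reduces to comparing sequence likelihoods under candidate models. Below is a hedged toy sketch with two hand-set discrete HMMs over three regions of interest; the study itself fits HMMs with Gaussian emission regions to each participant's fixations and clusters them, which is omitted here, and all parameter values are illustrative.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm. obs: ROI index per fixation."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum()); alpha /= alpha.sum()
    return loglik

# Toy models over ROIs {0: face centre, 1: left eye, 2: right eye}.
pi = np.array([1.0, 0.0])                       # two hidden states
A = np.array([[0.8, 0.2], [0.2, 0.8]])          # state transitions
B_holistic = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])   # mostly centre
B_analytic = np.array([[0.4, 0.3, 0.3], [0.2, 0.4, 0.4]])     # eyes + centre

def classify(fixations):
    """Label a fixation sequence by the model giving it higher likelihood."""
    lh = forward_loglik(fixations, pi, A, B_holistic)
    la = forward_loglik(fixations, pi, A, B_analytic)
    return "holistic" if lh > la else "analytic"
```

A centre-dominated scanpath is assigned to the holistic model and an eyes-heavy scanpath to the analytic model, mirroring the two pattern types reported in the abstract.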
Direct Gaze Modulates Face Recognition in Young Infants
ERIC Educational Resources Information Center
Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.
2007-01-01
From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to map still and video face images into a Euclidean space via a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
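Once features live in a Euclidean space, the S2V matching stage described above is straightforward. The sketch below uses a stand-in linear embedding in place of the trained CNN; the projection `W`, the average pooling of frame embeddings, and the threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def embed(img, W):
    """Stand-in for the learned CNN embedding: a fixed linear map followed
    by L2 normalization. W is a (d, n_pixels) projection matrix."""
    v = W @ img.ravel()
    return v / np.linalg.norm(v)

def s2v_match(still, video_frames, W, threshold=1.0):
    """Still-to-Video matching: embed the still query and each gallery
    frame, pool the frame embeddings by averaging, and accept the identity
    if the Euclidean distance falls below the threshold."""
    q = embed(still, W)
    g = np.mean([embed(f, W) for f in video_frames], axis=0)
    g = g / np.linalg.norm(g)
    d = np.linalg.norm(q - g)
    return d < threshold, d
```

V2S matching is the same computation with the roles of query and gallery swapped, which is why the paper can train a single embedding for both scenarios.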
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
2014-01-01
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose, and disguise variations in face images degrade the performance of SRC and of most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms, which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
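Plain SRC, which LCJDSRC extends, can be sketched in a few lines: sparsely code the probe over a dictionary whose columns are training faces, then assign the class whose atoms give the smallest reconstruction residual. This is a generic illustration using an l1 solver, not the authors' locality-constrained joint formulation:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(probe, dictionary, labels, alpha=0.01):
    """Plain SRC: sparse-code the probe, classify by class-wise residual."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(dictionary, probe)          # sparse coefficients over all atoms
    x = model.coef_
    best_class, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        # reconstruct using only this class's atoms
        res = np.linalg.norm(probe - dictionary @ np.where(mask, x, 0.0))
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# toy dictionary: one atom per class, in a 3-D "image" space
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
pred = src_classify(np.array([1.0, 0.0, 0.0]), D, ["a", "b"])
```

The probe aligns with class "a"'s atom, so its residual is near zero and "a" is returned.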
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model. Face identity is then obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
Spoof Detection for Finger-Vein Recognition System Using NIR Camera.
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-10-01
Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lower likelihood of theft or of inaccuracies caused by skin-condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system using presentation attack (fake) finger-vein images. As a result, spoof detection, known as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the researchers' observations of the differences between real (live) and presentation attack finger-vein images; consequently, detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision, delivering superior results compared with traditional handcrafted methods on applications such as image-based face recognition, gender recognition, and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system that uses a convolutional neural network (CNN) to improve on the detection ability of previous handcrafted methods. Using the CNN, we can derive a more suitable feature extractor for PAD than the handcrafted alternatives through a training procedure. We further process the extracted image features, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification, to enhance the detection of presentation attack finger-vein images.
Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
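The post-CNN stages of the proposed PAD pipeline (PCA for dimensionality reduction, SVM for live-vs-attack classification) follow a standard pattern. A minimal sketch with random stand-in features in place of real CNN activations; class counts, dimensions, and the separation between classes are all invented for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# random stand-ins for CNN features of live (0) vs. attack (1) images
live = rng.normal(0.0, 1.0, size=(60, 128))
attack = rng.normal(0.8, 1.0, size=(60, 128))
X = np.vstack([live, attack])
y = np.array([0] * 60 + [1] * 60)

# PCA compresses the feature space; the SVM separates live from attack
pad = make_pipeline(PCA(n_components=16), SVC(kernel="rbf"))
pad.fit(X, y)
train_acc = pad.score(X, y)
```

In practice the features would come from the trained CNN and the pipeline would be evaluated on held-out data rather than the training set.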
Sex influence on face recognition memory moderated by presentation duration and reencoding.
Weirich, Sebastian; Hoffmann, Ferdinand; Meissner, Lucia; Heinz, Andreas; Bengner, Thomas
2011-11-01
It has been suggested that women have better face recognition memory than men. Here we analyzed whether this advantage depends on better encoding or consolidation of information, and whether the advantage is visible only in short-term memory (STM) or also remains evident in long-term memory (LTM). We tested short- and long-term face recognition memory in 36 nonclinical participants (19 women). We varied the duration of item presentation (1, 5, and 10 s), the time of testing (immediately after the study phase, 1 hr later, and 24 hr later), and the possibility to reencode items (none, immediately after the study phase, after 1 hr). Women showed better overall face recognition memory than men (ηp² = .15, p < .05). We found this advantage, however, only with longer durations of item presentation (interaction of Sex × Presentation Duration, ηp² = .16, p < .05). Women's advantage in face recognition was visible mainly if participants had the possibility to reencode faces during earlier test trials. Our results suggest that women do not have better face recognition memory than men per se, but may profit more than men from longer presentation durations during encoding or from the possibility of reencoding. Future research on sex differences in face recognition memory should explicate possible causes for the better encoding of face information in women.
Artificial faces are harder to remember
Balas, Benjamin; Pacella, Jonathan
2015-01-01
Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852
Effects of Lateral Reversal on Recognition Memory for Photographs of Faces.
ERIC Educational Resources Information Center
McKelvie, Stuart J.
1983-01-01
Examined recognition memory for photographs of faces in four experiments using students and adults. Results supported a feature (rather than Gestalt) model of facial recognition in which the two sides of the face are different in its memory representation. (JAC)
When false recognition is out of control: the case of facial conjunctions.
Jones, Todd C; Bartlett, James C
2009-03-01
In three experiments, a dual-process approach to face recognition memory is examined, with a specific focus on the idea that a recollection process can be used to retrieve configural information of a studied face. Subjects could avoid, with confidence, a recognition error to conjunction lure faces (each a reconfiguration of features from separate studied faces) or feature lure faces (each based on a set of old features and a set of new features) by recalling a studied configuration. In Experiment 1, study repetition (one vs. eight presentations) was manipulated, and in Experiments 2 and 3, retention interval over a short number of trials (0-20) was manipulated. Different measures converged on the conclusion that subjects were unable to use a recollection process to retrieve configural information in an effort to temper recognition errors for conjunction or feature lure faces. A single process, familiarity, appears to be the sole process underlying recognition of conjunction and feature faces, and familiarity contributes, perhaps in whole, to discrimination of old from conjunction faces.
Fooprateepsiri, Rerkchai; Kurutach, Werasak
2014-03-01
Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination, and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a feature-based face recognition system when the pose of the input images differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions involving complex pose, illumination, and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination, and expression.
Interest and attention in facial recognition.
Burgess, Melinda C R; Weaver, George E
2003-04-01
When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.
The “parts and wholes” of face recognition: a review of the literature
Tanaka, James W.; Simonyi, Diana
2016-01-01
It has been claimed that faces are recognized as a “whole” rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested both in isolation and in the context of the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The “whole face” or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding is specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a “whole” stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and in people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and the recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495
Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.
Robotham, Ro J; Starrfelt, Randi
2017-01-01
Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.
Comparing the visual spans for faces and letters
He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.
2015-01-01
The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858
Oxytocin increases bias, but not accuracy, in face recognition line-ups.
Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda
2015-07-01
Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task-a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios.
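The signal detection quantities behind this result, sensitivity d' and criterion c, are computed from hit and false-alarm rates; a more liberal bias shows up as a more negative c even when d' is unchanged. A short illustration with hypothetical rates (not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal detection measures: sensitivity d' and criterion c.
    A more negative c means a more liberal bias toward 'target present'."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# hypothetical rates: more hits but also more false alarms
d_placebo, c_placebo = dprime_and_criterion(0.69, 0.31)
d_oxytocin, c_oxytocin = dprime_and_criterion(0.84, 0.50)
# d' is nearly unchanged, while c shifts in the liberal (negative) direction
```

This is the pattern the abstract describes: a criterion shift rather than a sensitivity gain.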
Luzzi, Simona; Baldinelli, Sara; Ranaldi, Valentina; Fabi, Katia; Cafazzo, Viviana; Fringuelli, Fabio; Silvestrini, Mauro; Provinciali, Leandro; Reverberi, Carlo; Gainotti, Guido
2017-01-08
Famous face and voice recognition is reported to be impaired both in semantic dementia (SD) and in Alzheimer's disease (AD), although more severely in the former. In AD, a coexistence of perceptual impairment in face and voice processing has also been reported, and this could contribute to the altered performance in complex semantic tasks. On the other hand, in SD both face and voice recognition disorders could be related to the prevalence of atrophy in the right temporal lobe (RTL). The aim of the present study was twofold: (1) to investigate famous face and voice recognition in SD and AD to verify whether the two diseases show a differential pattern of impairment, resulting from disruption of different cognitive mechanisms; and (2) to check whether face and voice recognition disorders prevail in patients with atrophy mainly affecting the RTL. To avoid the potential influence of primary perceptual problems on face and voice recognition, a pool of patients suffering from early SD and AD was administered a detailed set of tests exploring face and voice perception. Thirteen SD patients (8 with prevalence of right and 5 with prevalence of left temporal atrophy) and 25 AD patients, who did not show visual or auditory perceptual impairment, were finally selected and administered an experimental battery exploring famous face and voice recognition and naming. Twelve SD patients underwent cerebral PET imaging and were classified as right or left SD according to the onset modality and to the prevalent decrease in FDG uptake in the right or left temporal lobe, respectively. Correlation of PET imaging with famous face and voice recognition was performed. Results showed a differential performance profile in the two diseases: AD patients were significantly impaired on the naming tests but showed preserved recognition, whereas SD patients were profoundly impaired both in naming and in recognition of famous faces and voices.
Furthermore, face and voice recognition disorders prevailed in SD patients with RTL atrophy, who also showed a conceptual impairment on the Pyramids and Palm Trees test that was more marked in the pictorial than in the verbal modality. Finally, in the 12 SD patients for whom PET was available, a strong correlation between FDG uptake and face-to-name and voice-to-name matching was found in the right but not in the left temporal lobe. The data support the hypothesis of a different cognitive basis for the impairment of face and voice recognition in the two dementias and suggest that the pattern of impairment in SD may be due to a loss of semantic representations, while a defect of semantic control, with impaired naming and preserved recognition, might be hypothesized in AD. Furthermore, the correlation between face and voice recognition disorders and RTL damage is consistent with the hypothesis that in the RTL person-specific knowledge may be based mainly on non-verbal representations.
Leppänen, J M; Niehaus, D J H; Koen, L; Du Toit, E; Schoeman, R; Emsley, R
2006-06-01
Schizophrenia is associated with a deficit in the recognition of negative emotions from facial expressions. The present study examined the universality of this finding by studying facial expression recognition in an African Xhosa population. Forty-four Xhosa patients with schizophrenia and forty healthy controls were tested with a computerized task requiring rapid perceptual discrimination of matched positive (i.e., happy), negative (i.e., angry), and neutral faces. Patients were as accurate as controls in recognizing happy faces but showed a marked impairment in the recognition of angry faces. The impairment was particularly pronounced for high-intensity (open-mouth) angry faces. Patients also exhibited more false happy and angry responses to neutral faces than controls. No correlation between level of education or illness duration and emotion recognition was found, but the deficit in the recognition of negative emotions was more pronounced in familial than in non-familial cases of schizophrenia. These findings suggest that the deficit in the recognition of negative facial expressions may constitute a universal neurocognitive marker of schizophrenia.
Memory for angry faces, impulsivity, and problematic behavior in adolescence.
d'Acremont, Mathieu; Van der Linden, Martial
2007-04-01
Research has shown that cognitive processes like the attribution of hostile intention or angry emotion to others contribute to the development and maintenance of conduct problems. However, the role of memory has been understudied in comparison with attribution biases. The aim of this study was thus to test if a memory bias for angry faces was related to conduct problems in youth. Adolescents from a junior secondary school were presented with angry and happy faces and were later asked to recognize the same faces with a neutral expression. They also completed an impulsivity questionnaire. A teacher assessed their behavior. The results showed that a better recognition of angry faces than happy faces predicted conduct problems and hyperactivity/inattention as reported by the teacher. The memory bias effect was more pronounced for impulsive adolescents. It is suggested that a memory bias for angry faces favors disruptive behavior but that a good ability to control impulses may moderate the negative impact of this bias.
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces.
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we use the eigenface algorithm to convert a sketch into a synthesized sketch face image. Then, to address the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we apply the LDA (Linear Discriminant Analysis) algorithm to face sketch recognition on the synthesized face images both before and after super resolution. Extensive experiments on the CUHK face sketch database (CUFS) demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and that of the LDA algorithm improves from 69% to 75%. Moreover, the super-resolved synthesized face images not only better describe image details such as the hair, nose, and mouth, but also improve recognition accuracy effectively.
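The eigenface-plus-LDA recognition stage can be sketched with scikit-learn: project images into a PCA subspace, then classify with linear discriminant analysis. Toy vectors stand in for the sketch images; this is the generic PCA/LDA pattern, not the authors' pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# toy stand-ins for synthesized sketch images: 5 identities x 6 images, 64-D
means = rng.normal(size=(5, 64))
X = np.repeat(means, 6, axis=0) + 0.1 * rng.normal(size=(30, 64))
y = np.repeat(np.arange(5), 6)

pca = PCA(n_components=10).fit(X)                 # eigenface-style subspace
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
acc = lda.score(pca.transform(X), y)
```

On this well-separated toy data the PCA-compressed features are easily discriminated; real sketches would of course be harder.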
Emotion recognition following early psychosocial deprivation
Nelson, Charles A.; Westerlund, Alissa; McDermott, Jennifer Martin; Zeanah, Charles H.; Fox, Nathan A.
2014-01-01
We examined the ability to discriminate facial expressions among 8-year-old children who had been abandoned and placed in institutions in infancy and children with no institutional rearing (Never Institutionalized Group; NIG). Following a baseline assessment (average age = 22 months), half the institutionalized children were randomly assigned to a foster care intervention (foster care group; FCG) and half to remain in the institution (care as usual group; CAUG). All three groups had a more difficult time recognizing fearful as compared to neutral expressions. However, the NIG and FCG were both better at inhibiting responses to neutral and fearful faces than the CAUG. Regarding ERPs, the P1 was largest to angry faces for the NIG, smallest for the CAUG, and intermediate for the FCG. The N170 and the P300 were largest to fear in all groups. Although the children in foster care showed improvements in their ability to recognize fearful and neutral faces, and their P1 to angry faces was midway between that of the NIG and CAUG, we observed no effects of timing of placement. These findings support the view that institutional rearing leads to deficits in the ability to process facial emotion, and that placement in foster care partially, though incompletely, ameliorates these deficits. PMID:23627960
Roddy, S; Tiedt, L; Kelleher, I; Clarke, M C; Murphy, J; Rawdon, C; Roche, R A P; Calkins, M E; Richard, J A; Kohler, C G; Cannon, M
2012-10-01
Psychotic symptoms, also termed psychotic-like experiences (PLEs) in the absence of psychotic disorder, are common in adolescents and are associated with increased risk of schizophrenia-spectrum illness in adulthood. At the same time, schizophrenia is associated with deficits in social cognition, with deficits particularly documented in facial emotion recognition (FER). However, little is known about the relationship between PLEs and FER abilities, with only one previous prospective study examining the association between these abilities in childhood and reported PLEs in adolescence. The current study was a cross-sectional investigation of the association between PLEs and FER in a sample of Irish adolescents. The Adolescent Psychotic-Like Symptom Screener (APSS), a self-report measure of PLEs, and the Penn Emotion Recognition-40 Test (Penn ER-40), a measure of facial emotion recognition, were completed by 793 children aged 10-13 years. Children who reported PLEs performed significantly more poorly on FER (β=-0.03, p=0.035). Recognition of sad faces was the major driver of effects, with children performing particularly poorly when identifying this expression (β=-0.08, p=0.032). The current findings show that PLEs are associated with poorer FER. Further work is needed to elucidate causal relationships with implications for the design of future interventions for those at risk of developing psychosis.
Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.
Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar
2015-07-23
The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.
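To make the fusion idea concrete, the toy sketch below uses a minimal genetic algorithm to pick a single visible/thermal mixing weight that maximizes rank-1 recognition on synthetic similarity scores. The score matrices, population sizes, and single-scalar encoding are illustrative assumptions; the paper optimizes a richer, region-wise fusion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy similarity scores: rows = probes, cols = gallery identities.
# The true match lies on the diagonal; visible scores are noisier here.
n = 30
true_sim = np.eye(n)
vis = true_sim + rng.normal(scale=0.8, size=(n, n))
thr = true_sim + rng.normal(scale=0.3, size=(n, n))

def fitness(w):
    """Rank-1 recognition rate of the fused score matrix."""
    fused = w * vis + (1.0 - w) * thr
    return np.mean(fused.argmax(axis=1) == np.arange(n))

# Minimal real-valued GA over the fusion weight w in [0, 1]
pop = rng.uniform(0, 1, size=40)
for _ in range(30):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-20:]]                  # selection
    children = (parents[rng.integers(0, 20, 20)]
                + parents[rng.integers(0, 20, 20)]) / 2   # crossover
    children += rng.normal(scale=0.05, size=20)           # mutation
    pop = np.clip(np.concatenate([parents, children]), 0, 1)

best = pop[np.argmax([fitness(w) for w in pop])]
```

Because the top parents are carried over each generation, the best fitness never decreases; here the GA should learn to favor the cleaner thermal scores.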
Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems
Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar
2015-01-01
The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-09-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.
Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.
Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi
2015-03-19
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, a critical problem that is seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
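The Fisher-criterion patch weighting mentioned above can be sketched as a ratio of between-class to within-class scatter per patch. This is a hedged, minimal reading of the criterion on synthetic features; the feature dimensions, normalization, and function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fisher_patch_weights(patch_feats, labels):
    """Weight each patch by the Fisher separation criterion:
    between-class scatter / within-class scatter of its features.

    patch_feats: (n_samples, n_patches, dim) per-patch feature array
    labels: (n_samples,) class labels
    """
    classes = np.unique(labels)
    grand_mean = patch_feats.mean(axis=0)          # (n_patches, dim)
    between = np.zeros(patch_feats.shape[1])
    within = np.zeros(patch_feats.shape[1])
    for c in classes:
        cls = patch_feats[labels == c]
        cls_mean = cls.mean(axis=0)
        between += len(cls) * ((cls_mean - grand_mean) ** 2).sum(axis=1)
        within += ((cls - cls_mean) ** 2).sum(axis=(0, 2))
    weights = between / (within + 1e-12)
    return weights / weights.sum()                 # normalize to sum to 1

# Toy demo: patch 0 separates the two classes, patch 1 is pure noise
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 50)
feats = rng.normal(size=(100, 2, 8))
feats[labels == 1, 0, :] += 3.0                    # shift class 1 in patch 0
w = fisher_patch_weights(feats, labels)
```

A discriminative patch (here patch 0) receives a much larger weight than an uninformative one.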
Unaware person recognition from the body when face identification fails.
Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J
2013-11-01
How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.
Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.
Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth
2016-03-01
Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.
Face Recognition Using Local Quantized Patterns and Gabor Filters
NASA Astrophysics Data System (ADS)
Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.
2015-05-01
The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition by using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from a standardized FERET database shows that our method is invariant to the general variations of lighting, expression, occlusion and aging. The proposed approach allows about 20% correct recognition accuracy increase compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
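A Gabor filter bank of the kind referred to above can be built directly from a Gaussian-windowed cosine grating. The sketch below is a generic, numpy-only illustration with assumed kernel sizes and parameters, not the authors' configuration; production code would typically use an optimized convolution routine.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating
    oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Bank of 4 orientations, as commonly used for face feature extraction
bank = [gabor_kernel(15, t, wavelength=6.0, sigma=4.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]

def filter_image(img, kernel):
    """Naive 'valid'-mode 2-D correlation with the kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A grating varying along x responds most to the matching orientation
xx = np.tile(np.cos(2 * np.pi * np.arange(40) / 6.0), (40, 1))
responses = [np.abs(filter_image(xx, k)).mean() for k in bank]
```

The strongest mean response comes from the kernel whose orientation and wavelength match the input pattern, which is what makes such banks useful as texture descriptors.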
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
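The PLS regression at the core of this approach can be sketched with the classic single-response NIPALS recursion. The toy data below merely stands in for thermal features predicting a visible-domain response; the function name, component count, and data are assumptions for illustration, not the paper's one-vs-all model-building procedure.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (single-response partial least squares) via NIPALS.
    Returns (b, x_mean, y_mean); predict with (X_new - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)        # weight direction
        t = Xk @ w                       # latent scores
        tt = t @ t
        p = Xk.T @ t / tt                # X loadings
        qk = (yk @ t) / tt               # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)  # b = W (P^T W)^{-1} q
    return b, x_mean, y_mean

# Toy linear problem standing in for a cross-modal feature regression
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.01, size=200)
b, xm, ym = pls1_fit(X, y, n_components=4)
pred = (X - xm) @ b + ym
```

With enough components to span the informative subspace, the PLS predictions recover the underlying linear relation almost exactly.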
Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia
2016-01-01
The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.
Rotshtein, Pia
2016-01-01
The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down. PMID:27942030
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
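A 0-1 knapsack selection of base classifiers can be sketched with the standard dynamic program. How the paper maps diversity and accuracy onto weights and values is not specified here, so the framing below (diversity score as integerized "weight", accuracy as "value", a total-diversity budget) is an assumption for illustration only.

```python
def knapsack_select(accuracies, diversities, budget):
    """0-1 knapsack over base classifiers: maximize total accuracy
    subject to a budget on total (integerized) diversity scores.
    Returns the indices of the selected classifiers."""
    n = len(accuracies)
    w = [int(round(d * 10)) for d in diversities]  # integer weights
    W = int(round(budget * 10))
    dp = [[0.0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            dp[i][c] = dp[i - 1][c]                # skip classifier i-1
            if w[i - 1] <= c:
                cand = dp[i - 1][c - w[i - 1]] + accuracies[i - 1]
                if cand > dp[i][c]:                # take classifier i-1
                    dp[i][c] = cand
    # Backtrack to recover the chosen subset
    chosen, c = [], W
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= w[i - 1]
    return sorted(chosen)

selected = knapsack_select([0.6, 0.5, 0.7], [0.2, 0.3, 0.4], budget=0.6)
```

On this tiny instance the optimal subset is the first and third classifiers, whose combined weight exactly meets the budget.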
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-01-01
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350
Voice Recognition in Face-Blind Patients
Liu, Ran R.; Pancaroglu, Raika; Hills, Charlotte S.; Duchaine, Brad; Barton, Jason J. S.
2016-01-01
Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia. PMID:25349193
Tracking the truth: the effect of face familiarity on eye fixations during deception.
Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert
2017-05-01
In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.
The asymmetric distribution of informative face information during gender recognition.
Hu, Fengpei; Hu, Huan; Xu, Lian; Qin, Jungang
2013-02-01
Recognition of the gender of a face is important in social interactions. In the current study, the distribution of informative facial information was systematically examined during gender judgment using two methods, Bubbles and Focus windows techniques. Two experiments found that the most informative information was around the eyes, followed by the mouth and nose. Other parts of the face contributed to the gender recognition but were less important. The left side of the face was used more during gender recognition in two experiments. These results show mainly areas around the eyes are used for gender judgment and demonstrate perceptual asymmetry with a normal (non-chimeric) face.
Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup
Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi
2016-01-01
Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the context of interpersonal situations, an emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions. PMID:26973553
Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup.
Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi
2016-01-01
Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the context of interpersonal situations, an emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions.
The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition
ERIC Educational Resources Information Center
Robbins, Rachel A.; Coltheart, Max
2012-01-01
Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well known algorithm used to detect and describe local features in images. It is invariant to image scale, rotation and robust to the noise and illumination. In this paper, a novel method used for face recognition based on SIFT is proposed, which combines the optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC) together and can eliminate the false matches of face recognition effectively. Experiments on ORL face database show that many false matches can be eliminated and better recognition rate is achieved.
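The mutual-matching stage combined in this method can be sketched without a SIFT implementation: keep a correspondence only when it is the nearest neighbour in both directions. The descriptors below are random stand-ins for real SIFT vectors, and PROSAC (a guided variant of RANSAC) is deliberately omitted; this is a hedged illustration of the cross-check idea only.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Cross-check matching: keep a pair (i, j) only if descriptor i's
    nearest neighbour in B is j AND j's nearest neighbour in A is i."""
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)
    b_to_a = d.argmin(axis=0)
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Toy demo: B is a shuffled, slightly perturbed copy of A
rng = np.random.default_rng(3)
A = rng.normal(size=(10, 128))                 # 10 SIFT-sized descriptors
perm = rng.permutation(10)
B = A[perm] + rng.normal(scale=0.01, size=(10, 128))
matches = mutual_matches(A, B)
```

Every recovered pair (i, j) should satisfy perm[j] == i, i.e., the cross-check finds exactly the planted correspondences.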
Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2017-01-01
A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.
Calvo, Manuel G; Nummenmaa, Lauri
2009-12-01
Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.
Face photo-sketch synthesis and recognition.
Wang, Xiaogang; Tang, Xiaoou
2009-11-01
In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html).
Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia
McIntosh, Lindsey G.; Park, Sohee
2014-01-01
Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one’s ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as “thin slices” of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. PMID:25037526
Face Recognition From One Example View.
1995-09-01
Proceedings, International Workshop on Automatic Face- and Gesture-Recognition, pages 248-253, Zurich, 1995. [32] Yael Moses, Shimon Ullman, and Shimon...recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991. [49] Shimon Ullman and Ronen Basri. Recognition by linear combinations of models
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool across many spheres of activity. The face recognition process comprises three stages: face detection, feature extraction, and recognition. In this paper, we put forth a new application of face detection and recognition in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face against the templates available in the database, and updates the attendance of the respective students.
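The matching-and-logging step can be sketched as follows. This is an illustrative outline, not the paper's implementation: it assumes some upstream detector has already produced face embeddings, and `mark_attendance`, the template dictionary, and the similarity threshold are all hypothetical:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mark_attendance(scanned, templates, threshold=0.8):
    """Match each scanned face embedding against enrolled student templates
    and return the set of student IDs marked present."""
    present = set()
    for emb in scanned:
        best_id, best_score = None, threshold
        for student_id, tmpl in templates.items():
            score = cosine(emb, tmpl)
            if score > best_score:
                best_id, best_score = student_id, score
        if best_id is not None:
            present.add(best_id)
    return present

# Toy 2-D embeddings for two enrolled students and one scanned face.
templates = {"s001": np.array([1.0, 0.0]), "s002": np.array([0.0, 1.0])}
scans = [np.array([0.9, 0.1])]
print(mark_attendance(scans, templates))  # {'s001'}
```

In a real system the embeddings would come from a face detection and feature extraction stage, and the attendance set would be written back to the student database.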
[Neural mechanisms of facial recognition].
Nagai, Chiyoko
2007-01-01
We review recent research on the neural mechanisms of facial recognition in the light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a central role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly a specialized area for facial processing is controversial; some researchers argue that the FFA reflects 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are also related to gaze recognition, which explains why a patient with bilateral amygdala damage failed to recognize only fear expressions: the information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition observed in prosopagnosic patients.
[Face recognition in patients with schizophrenia].
Doi, Hirokazu; Shinohara, Kazuyuki
2012-07-01
It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.
Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.
Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal
2018-04-23
Face recognition aims to establish the identity of a person based on facial characteristics. Age group estimation, on the other hand, is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues: skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in overall performance compared to using the face recognition algorithm alone. Experimental results on two large facial aging datasets, MORPH and FERET, show that the proposed age-group-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
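The idea of measuring asymmetric facial dimensions can be conveyed with a toy geometric sketch. The paper itself uses deeply learned features from a dCNN; the `asymmetry_score` helper below is a hypothetical, purely geometric stand-in that reflects right-side landmarks across the facial midline and measures how far they land from their left-side counterparts:

```python
import numpy as np

def asymmetry_score(landmarks_l, landmarks_r, midline_x):
    """Crude asymmetry measure: reflect right-side landmarks across the
    facial midline (vertical line x = midline_x) and average their
    distance to the corresponding left-side landmarks."""
    reflected = landmarks_r.copy()
    reflected[:, 0] = 2 * midline_x - reflected[:, 0]
    return float(np.mean(np.linalg.norm(landmarks_l - reflected, axis=1)))

# Two landmark pairs (eye corner, mouth corner); the right mouth corner
# sits 2 px farther from the midline than the left one.
left = np.array([[40.0, 50.0], [35.0, 80.0]])
right = np.array([[60.0, 50.0], [67.0, 80.0]])
print(asymmetry_score(left, right, 50.0))  # 1.0
```

A perfectly symmetric face scores 0; scores grow with age-related asymmetry, which is what makes such dimensions usable for age group estimation.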
Shannon, Robert W; Patrick, Christopher J; Venables, Noah C; He, Sheng
2013-12-01
The ability to recognize a variety of different human faces is undoubtedly one of the most important and impressive functions of the human perceptual system. Neuroimaging studies have revealed multiple brain regions (including the FFA, STS, OFA) and electrophysiological studies have identified differing brain event-related potential (ERP) components (e.g., N170, P200) possibly related to distinct types of face information processing. To evaluate the heritability of ERP components associated with face processing, including N170, P200, and LPP, we examined ERP responses to fearful and neutral face stimuli in monozygotic (MZ) and dizygotic (DZ) twins. Concordance levels for early brain response indices of face processing (N170, P200) were found to be stronger for MZ than DZ twins, providing evidence of a heritable basis to each. These findings support the idea that certain key neural mechanisms for face processing are genetically coded. Implications for understanding individual differences in recognition of facial identity and the emotional content of faces are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W
2016-10-01
A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang
2014-01-01
Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461
Robust Point Set Matching for Partial Face Recognition.
Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng
2016-03-01
Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially unconstrained environments, human faces may be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural and the geometrical information of the local features are used explicitly and simultaneously for matching. Finally, the similarity of two faces is computed as the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
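The core intuition of combining textural and geometrical cues when matching keypoint sets can be sketched minimally. This greedy nearest-neighbour version is far simpler than the robust, discriminative matching the paper proposes; `match_points` and the weight `alpha` are illustrative assumptions:

```python
import numpy as np

def match_points(desc_a, pos_a, desc_b, pos_b, alpha=0.1):
    """Match each keypoint in set A to its best counterpart in set B using
    a combined cost: textural (descriptor) distance plus a weighted
    geometrical (position) distance."""
    matches = []
    for i in range(len(desc_a)):
        tex = np.linalg.norm(desc_b - desc_a[i], axis=1)
        geo = np.linalg.norm(pos_b - pos_a[i], axis=1)
        cost = tex + alpha * geo
        matches.append(int(np.argmin(cost)))
    return matches

# Two keypoints whose order is swapped between the two sets.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
pos_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.0, 1.0], [1.0, 0.0]])
pos_b = np.array([[10.0, 10.0], [0.0, 0.0]])
print(match_points(desc_a, pos_a, desc_b, pos_b))  # [1, 0]
```

Once correspondences are fixed, the summed matching cost over the aligned pairs can serve as the face-to-face distance used for identification.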
ERIC Educational Resources Information Center
Wilson, Rebecca; Pascalis, Olivier; Blades, Mark
2007-01-01
We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…
Score Fusion and Decision Fusion for the Performance Improvement of Face Recognition
2013-07-01
A Hamming distance (HD) [7] is calculated with the FP-CGF to measure the similarities among faces. The matched face has the shortest HD from...then put into a face pattern byte (FPB) pixel-by-pixel. A HD is calculated with the FPB to measure the similarities among faces, and recognition is...all query users are included in the database), the recognition performance can be measured by a verification rate (VR), the percentage of the
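The Hamming-distance matching rule mentioned in these fragments is easy to illustrate. The binary codes and gallery below are invented for the example; only the rule itself (the gallery face with the shortest bitwise distance to the probe is the match) comes from the text:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two binary face pattern codes."""
    return float(np.mean(code_a != code_b))

# Toy 8-bit face pattern codes for two gallery identities and one probe.
gallery = {"A": np.array([1, 0, 1, 1, 0, 0, 1, 0]),
           "B": np.array([1, 1, 1, 0, 0, 1, 1, 1])}
probe = np.array([1, 0, 1, 1, 0, 1, 1, 0])
best = min(gallery, key=lambda k: hamming_distance(probe, gallery[k]))
print(best)  # A  (1 differing bit vs. 3 for B)
```

A verification rate (VR) would then be the percentage of probes whose shortest-distance match is the correct identity.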
Face recognition using slow feature analysis and contourlet transform
NASA Astrophysics Data System (ADS)
Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan
2018-04-01
In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then exploits slow feature analysis for facial feature extraction. We name this new method, which combines slow feature analysis and the contourlet transform, CT-SFA. Experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
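Slow feature analysis itself can be sketched in a few lines. This is a minimal linear SFA, with the contourlet decomposition omitted and `linear_sfa` a hypothetical helper: whiten the input signals, then take the directions along which the temporal derivative varies least (the "slowest" features):

```python
import numpy as np

def linear_sfa(X, n_components):
    """Minimal linear slow feature analysis on a (time x features) array:
    whiten the signals, then keep the directions in which the temporal
    derivative has the smallest variance."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d, E = np.linalg.eigh(cov)
    W = E / np.sqrt(d + 1e-12)          # whitening matrix (scaled eigenvectors)
    Z = X @ W                           # whitened signals, unit covariance
    dZ = np.diff(Z, axis=0)             # temporal derivative
    d2, P = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ P[:, :n_components]      # eigh sorts ascending: slowest first
```

On a linear mixture of a slow and a fast sine wave, the first extracted component recovers the slow source (up to sign and scale), which is the property SFA exploits for feature extraction.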
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole-face information, only ~2.5% lower than the best whole-face system while using only ~1/3 of the facial region.
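The geometry-based feature construction can be sketched simply. The keypoint layout and the `geometric_features` helper below are illustrative assumptions, not the paper's exact feature set: pairwise distances among eye/eyebrow keypoints, normalized by inter-ocular distance so the features are scale-invariant, would then feed an SVM classifier:

```python
from itertools import combinations
import numpy as np

def geometric_features(keypoints):
    """Build an expression feature vector from pairwise distances between
    eye/eyebrow keypoints, normalized by inter-ocular distance.
    Assumes keypoints[0] and keypoints[1] are the two eye centres."""
    pts = np.asarray(keypoints, dtype=float)
    iod = np.linalg.norm(pts[0] - pts[1])
    return np.array([np.linalg.norm(pts[i] - pts[j]) / iod
                     for i, j in combinations(range(len(pts)), 2)])

# Two eye centres plus two eyebrow points -> C(4, 2) = 6 distance features.
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0), (10.0, 5.0)]
f = geometric_features(pts)
```

A feature-selection pass such as SFS would then keep only the distance pairs that best separate the five expression classes.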
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) primarily affects peripheral vision at first. Current behavioral studies support the idea that the visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry. This is particularly true for visual tasks involving higher-level processes than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects, patients with peripheral but no central defect, and age-matched controls participated in the experiment. All participants performed two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task with scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity assessed by perimetry. However, the deficit was greater for categorization than for detection. Patients without a central defect performed similarly to controls in the detection and categorization of faces. However, while the detection of scene images was well maintained, these patients showed a deficit in scene categorization. This suggests that the simple loss of peripheral vision can be detrimental to scene recognition, even when the information is displayed in central vision.
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572
ERIC Educational Resources Information Center
Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.
2011-01-01
To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…
Recognition and identification of famous faces in patients with unilateral temporal lobe epilepsy.
Seidenberg, Michael; Griffith, Randall; Sabsevitz, David; Moran, Maria; Haltiner, Alan; Bell, Brian; Swanson, Sara; Hammeke, Thomas; Hermann, Bruce
2002-01-01
We examined the performance of 21 patients with unilateral temporal lobe epilepsy (TLE) and hippocampal damage (10 left, 11 right) and 10 age-matched controls on the recognition and identification (name and occupation) of well-known faces. Famous face stimuli were selected from four time periods: the 1970s, the 1980s, 1990-1994, and 1995-1996. Differential patterns of performance were observed for the left and right TLE groups across distinct face processing components. The left TLE group showed a selective impairment in naming famous faces while performing similarly to controls in face recognition and semantic identification (i.e., occupation). In contrast, the right TLE group was impaired across all components of face memory: face recognition, semantic identification, and face naming. The naming impairment in the left TLE group was characterized by a temporal gradient, with better naming performance for famous faces from more distant time periods. Findings are discussed in terms of the role of the temporal lobe system in the acquisition, retention, and retrieval of face semantic networks, and the differential effects of lateralized temporal lobe lesions in this process.
Semantic and visual determinants of face recognition in a prosopagnosic patient.
Dixon, M J; Bub, D N; Arguin, M
1998-05-01
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other, nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulty pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, all ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang
2015-02-01
Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.
Actively Paranoid Patients with Schizophrenia Over Attribute Anger to Neutral Faces
Pinkham, Amy E.; Brensinger, Colleen; Kohler, Christian; Gur, Raquel E.; Gur, Ruben C.
2010-01-01
Previous investigations of the influence of paranoia on facial affect recognition in schizophrenia have been inconclusive, as some studies demonstrate better performance for paranoid relative to non-paranoid patients and others show that paranoid patients display greater impairments. These studies have been limited by small sample sizes and inconsistencies in the criteria used to define groups. Here, we utilized an established emotion recognition task and a large sample to examine differential performance in emotion recognition ability between patients who were actively paranoid (AP) and those who were not actively paranoid (NAP). Accuracy and error patterns on the Penn Emotion Recognition test (ER40) were examined in 132 patients (64 NAP and 68 AP). Groups were defined based on the presence of paranoid ideation at the time of testing rather than diagnostic subtype. AP and NAP patients did not differ in overall task accuracy; however, an emotion by group interaction indicated that AP patients were significantly worse than NAP patients at correctly labeling neutral faces. A comparison of error patterns on neutral stimuli revealed that the groups differed only in misattributions of anger expressions, with AP patients being significantly more likely to misidentify a neutral expression as angry. The present findings suggest that paranoia is associated with a tendency to over-attribute threat to ambiguous stimuli and also lend support to emerging hypotheses of amygdala hyperactivation as a potential neural mechanism for paranoid ideation. PMID:21112186
Self- or familiar-face recognition advantage? New insight using ambient images.
Bortolon, Catherine; Lorieux, Siméon; Raffard, Stéphane
2018-06-01
Self-face recognition has been widely explored in the past few years. Nevertheless, the current literature relies on standardized photographs, which do not represent daily-life face recognition. We therefore aimed, for the first time, to evaluate self-face processing in healthy individuals using natural/ambient images that contain variations in the environment and in the face itself. In total, 40 undergraduate and graduate students performed a forced delayed-matching task including images of one's own face and of friend, famous, and unknown individuals. For both reaction time and accuracy, results showed that participants were faster and more accurate when matching different images of their own face compared to both famous and unfamiliar faces. Nevertheless, no significant differences were found between self-face and friend-face or between friend-face and famous-face. Participants were also faster and more accurate when matching friend and famous faces compared to unfamiliar faces. Our results suggest that the faster and more accurate responses to the self-face might be better explained by a familiarity effect, that is, (1) the result of frequent exposure to one's own image through mirrors and photos, (2) a more robust mental representation of one's own face, and (3) strong face recognition units, as for other familiar faces.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and poses. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. The virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. Notably, any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
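The virtual-sample idea can be sketched with a plain (non-kernel) collaborative representation classifier. This is an illustrative simplification of the paper's method; `crc_classify`, the noise level, and the ridge parameter are assumptions for the example:

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01, noise=0.05, seed=0):
    """Collaborative representation classification with virtual samples:
    duplicate each training face (a column of X) with added Gaussian
    noise, code the probe y over the enlarged dictionary with a ridge
    penalty, and assign the class with the smallest reconstruction
    residual."""
    rng = np.random.default_rng(seed)
    Xv = np.hstack([X, X + noise * rng.standard_normal(X.shape)])
    lv = np.concatenate([labels, labels])
    # Ridge-regularized coding: a = (Xv^T Xv + lam I)^-1 Xv^T y
    a = np.linalg.solve(Xv.T @ Xv + lam * np.eye(Xv.shape[1]), Xv.T @ y)
    classes = np.unique(lv)
    residuals = [np.linalg.norm(y - Xv[:, lv == c] @ a[lv == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Two classes with one training sample each; the noisy duplicates act as
# virtual samples. A probe near class 0's direction is assigned class 0.
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
labels = np.array([0, 1])
pred = crc_classify(X, labels, np.array([0.95, 0.05]))
```

The kernel version in the paper replaces the inner products above with kernel evaluations, but the coding-plus-residual logic is the same.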
Karen and George: Face Recognition by Visually Impaired Children.
ERIC Educational Resources Information Center
Ellis, Hadyn D.; And Others
1988-01-01
Two visually impaired children, aged 8 and 10, appeared to have severe difficulty in recognizing faces. After assessment, it became apparent that only one had unusually poor facial recognition skills. After training, which included matching face photographs, schematic faces, and digitized faces, there was no evidence of any improvement.…
The cross-race effect in face recognition memory by bicultural individuals.
Marsh, Benjamin U; Pezdek, Kathy; Ozery, Daphna Hausman
2016-09-01
Social-cognitive models of the cross-race effect (CRE) generally specify that cross-race faces are automatically categorized as an out-group, and that different encoding processes are then applied to same-race and cross-race faces, resulting in better recognition memory for same-race faces. We examined whether cultural priming moderates the cognitive categorization of cross-race faces. In Experiment 1, monoracial Latino-Americans, considered to have a bicultural self, were primed to focus on either a Latino or American cultural self and then viewed Latino and White faces. Latino-Americans primed as Latino exhibited higher recognition accuracy (A') for Latino than White faces; those primed as American exhibited higher recognition accuracy for White than Latino faces. In Experiment 2, as predicted, prime condition did not moderate the CRE in European-Americans. These results suggest that for monoracial biculturals, priming either of their cultural identities influences the encoding processes applied to same- and cross-race faces, thereby moderating the CRE. Copyright © 2016 Elsevier B.V. All rights reserved.
Face recognition using facial expression: a novel approach
NASA Astrophysics Data System (ADS)
Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.
2008-04-01
Facial expressions are undoubtedly among the most effective forms of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction, and each line on the face adds an attribute to that identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition that focuses on the facial expressions of the subject to identify his or her face. This area has received little attention in earlier work. According to earlier research, it is difficult to alter one's natural expression, so our technique can be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted indicate that this technique can give a new direction to the field of face recognition and provide a base for critical defense- and security-related applications.
Wang, Rong
2015-01-01
In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtain the mirror faces generated from the original training samples and combine the two kinds of samples into a new training set. Face recognition experiments show that our method obtains high classification accuracy.
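Generating mirror faces is simply a horizontal flip of each training image. The `add_mirror_faces` helper below is an illustrative sketch of this augmentation step (the augmented set would then be fed to the MSEC classifier):

```python
import numpy as np

def add_mirror_faces(images, labels):
    """Augment a training set with mirror faces: each face image is
    horizontally flipped and added as a virtual sample of the same class."""
    mirrored = [img[:, ::-1] for img in images]
    return images + mirrored, labels + labels

# A tiny 2x2 "face"; its mirror reverses each row.
faces = [np.array([[1, 2],
                   [3, 4]])]
aug, lab = add_mirror_faces(faces, ["alice"])
print(aug[1])  # horizontally flipped copy of faces[0]
```

Because a mirror face is a plausible view of the same person, the flip doubles the training set at no labeling cost, which is exactly what MSEC needs when samples are scarce.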
Efficient live face detection to counter spoof attack in face recognition systems
NASA Astrophysics Data System (ADS)
Biswas, Bikram Kumar; Alam, Mohammad S.
2015-03-01
Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication, and liveness detection of the face of an actual user are a major challenge, because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
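The paper compares energies in selected high-frequency bands of the power spectra of live and non-live faces (using a 3D FFT over video frames). A simplified 2D, single-image sketch of the band-energy idea, with synthetic stand-ins for the two face types, so the cutoff and the "fake" smoothing are assumptions:

```python
import numpy as np

def high_freq_energy(image, cutoff=0.25):
    """Total power at spatial frequencies above `cutoff` (fraction of Nyquist)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return power[radius > cutoff].sum()

rng = np.random.default_rng(0)
live = rng.random((64, 64))  # stand-in for a live face: rich fine detail
# stand-in for a re-captured (printed/replayed) face: smoothed copy
fake = sum(np.roll(live, s, axis) for s in (-1, 0, 1) for axis in (0, 1)) / 6
print(high_freq_energy(live) > high_freq_energy(fake))  # True: live keeps more HF energy
```

A liveness decision would threshold this band-energy difference; the actual paper works on average spectra over frames.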
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics refers to pattern recognition systems used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. As an improvement on ASM, the Active Appearance Model (AAM) algorithm extracts both the shape and the texture of a specified object simultaneously. In this paper we describe the two algorithms in more detail and report experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate feature point localization than AAM, but AAM achieves a better match to the texture.
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
In order to improve the recognition rate across postures, this paper proposes a method of facial correction based on a Gaussian process, which builds a nonlinear regression model between frontal and side faces using a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
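The pose-correction step is Gaussian-process regression from side-face to frontal-face feature vectors with a combined kernel. A self-contained sketch on synthetic vectors; the data, dimensions, and the particular RBF-plus-linear kernel mix are assumptions, not the paper's:

```python
import numpy as np

def combined_kernel(A, B, length=1.0):
    """Combined kernel: an RBF term plus a linear (dot-product) term."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length**2) + A @ B.T

def gp_predict(X, Y, Xstar, sigma=0.1):
    """GP posterior mean: K(X*, X) (K(X, X) + sigma^2 I)^{-1} Y."""
    K = combined_kernel(X, X) + sigma**2 * np.eye(len(X))
    return combined_kernel(Xstar, X) @ np.linalg.solve(K, Y)

rng = np.random.default_rng(0)
side = rng.random((50, 8))         # stand-in side-face feature vectors
front = side @ rng.random((8, 8))  # assumed corresponding frontal-face vectors
corrected = gp_predict(side, front, side[:2])
print(corrected.shape)             # (2, 8)
```

The "corrected" frontal vectors would then go to the SVM recognizer, as in the abstract.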
Development of an Autonomous Face Recognition Machine.
1986-12-08
This approach, like Baron’s, would be a very time consuming task. The problem of locating a face in Bromley’s work was the least complex of the three...top level design and the development and design decisions that were made in developing the Autonomous Face Recognition Machine (AFRM). The chapter is...images within a digital image. The second section examines the algorithm used in performing face recognition. The decision to divide the development
Spatial-frequency cutoff requirements for pattern recognition in central and peripheral vision
Kwon, MiYoung; Legge, Gordon E.
2011-01-01
It is well known that object recognition requires spatial frequencies exceeding some critical cutoff value. People with central scotomas who rely on peripheral vision have substantial difficulty with reading and face recognition. Deficiencies of pattern recognition in peripheral vision might result in higher cutoff requirements and may contribute to the functional problems of people with central-field loss. Here we asked about differences in spatial-cutoff requirements in central and peripheral vision for letter and face recognition. The stimuli were the 26 letters of the English alphabet and 26 celebrity faces. Each image was blurred using a low-pass filter in the spatial frequency domain. Critical cutoffs (defined as the minimum low-pass filter cutoff yielding 80% accuracy) were obtained by measuring recognition accuracy as a function of cutoff (in cycles per object). Our data showed that critical cutoffs increased from central to peripheral vision by 20% for letter recognition and by 50% for face recognition. We asked whether these differences could be accounted for by central/peripheral differences in the contrast sensitivity function (CSF). We addressed this question by implementing an ideal-observer model which incorporates empirical CSF measurements and tested the model on letter and face recognition. The success of the model indicates that central/peripheral differences in the cutoff requirements for letter and face recognition can be accounted for by the information content of the stimulus limited by the shape of the human CSF, combined with a source of internal noise and followed by an optimal decision rule. PMID:21854800
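The blurring procedure, low-pass filtering at a cutoff measured in cycles per image, can be sketched as follows (an ideal frequency-domain filter; the exact filter shape used in the study is an assumption here):

```python
import numpy as np

def low_pass(image, cutoff_cycles):
    """Remove all spatial frequencies above `cutoff_cycles` (cycles per image)."""
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    f[np.sqrt(yy**2 + xx**2) > cutoff_cycles] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
blurred = low_pass(img, 8)  # keep only the lowest 8 cycles/image
# A critical cutoff would be found by raising `cutoff_cycles`
# until recognition accuracy reaches the 80% criterion.
```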
Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-01-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446
Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized
Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.
2012-01-01
Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051
[Neural basis of self-face recognition: social aspects].
Sugiura, Motoaki
2012-07-01
Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processes, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment it responded to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation also responded to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection underlying physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.
NASA Astrophysics Data System (ADS)
Barsics, Catherine; Brédart, Serge
2010-11-01
Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. The present research aimed to evaluate whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic (biographical) information was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of frequency of exposure for both types of stimuli (voices and faces). The rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than familiar voices, even though the level of overall recognition was similar for both stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.
Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.
Jones, Nicola; Riby, Leigh M; Smith, Michael A
2016-07-01
Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition, including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as faces is a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.
Development of Face Recognition in 5- to 15-Year-Olds
ERIC Educational Resources Information Center
Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka
2013-01-01
This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…
Transfer between Pose and Illumination Training in Face Recognition
ERIC Educational Resources Information Center
Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie
2009-01-01
The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…
Psychocentricity and participant profiles: implications for lexical processing among multilinguals
Libben, Gary; Curtiss, Kaitlin; Weber, Silke
2014-01-01
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles, or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
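The core recoding step, mapping participant variables onto iconically related feature sizes, can be sketched like this (the linear scaling scheme and score range are assumptions for illustration; the paper defines its own mapping):

```python
def facial_profile(reading, speaking, listening, lo=0.0, hi=100.0):
    """Recode ability scores as relative facial-feature sizes:
    reading -> eye, speaking -> mouth, listening -> ear,
    each scaled to [0.5, 1.5] times a baseline feature size."""
    scale = lambda s: 0.5 + (s - lo) / (hi - lo)
    return {"eye": scale(reading), "mouth": scale(speaking), "ear": scale(listening)}

print(facial_profile(50, 100, 0))  # {'eye': 1.0, 'mouth': 1.5, 'ear': 0.5}
```

A drawing routine would then render eye, mouth, and ear at these relative sizes, letting the face recognition system pick out profile patterns at a glance.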
From Caregivers to Peers: Puberty Shapes Human Face Perception.
Picci, Giorgia; Scherf, K Suzanne
2016-11-01
Puberty prepares mammals to sexually reproduce during adolescence. It is also hypothesized to invoke a social metamorphosis that prepares adolescents to take on adult social roles. We provide the first evidence to support this hypothesis in humans and show that pubertal development retunes the face-processing system from a caregiver bias to a peer bias. Prior to puberty, children exhibit enhanced recognition for adult female faces. With puberty, superior recognition emerges for peer faces that match one's pubertal status. As puberty progresses, so does the peer recognition bias. Adolescents become better at recognizing faces with a pubertal status similar to their own. These findings reconceptualize the adolescent "dip" in face recognition by showing that it is a recalibration of the face-processing system away from caregivers toward peers. Thus, in addition to preparing the physical body for sexual reproduction, puberty shapes the perceptual system for processing the social world in new ways. © The Author(s) 2016.
The hows and whys of face memory: level of construal influences the recognition of human faces
Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean
2015-01-01
Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586
Influence of motion on face recognition.
Bonfiglio, Natale S; Manfredi, Valentina; Pessa, Eliano
2012-02-01
The influence of motion information and temporal associations on the recognition of unfamiliar faces was investigated using two groups performing a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of the face rotating in depth; the other group saw random sequences of the same views. In one condition, participants viewed the sequences of views in rapid succession with a negligible interstimulus interval (ISI); this condition was characterized by three different presentation times. In another condition, participants were presented the sequences with a 1-sec ISI between the views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, related to a stronger impression of face rotation. Analysis of data from 45 participants showed that a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performance with regular and random sequences were not significant.
Support vector machine for automatic pain recognition
NASA Astrophysics Data System (ADS)
Monwar, Md Maruf; Rezaei, Siamak
2009-02-01
Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces from the stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural-network-based and eigenimage-based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
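The classification stage, shape and location features fed to an SVM, can be sketched with scikit-learn on synthetic features; the feature set and its distributions are invented stand-ins, not the paper's measurements:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-frame shape features (e.g. mouth openness, brow distance, ...)
pain    = rng.normal(1.0, 0.3, (40, 4))   # frames labeled "pain"
no_pain = rng.normal(0.0, 0.3, (40, 4))   # frames labeled "no pain"
X = np.vstack([pain, no_pain])
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf").fit(X, y)         # train the SVM classifier
print(clf.score(X, y) >= 0.95)            # True: clusters this separated are easy
```

In the full system, the same features would come from the face detector rather than a random generator.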
Blood perfusion construction for infrared face recognition based on bio-heat transfer.
Xie, Zhihua; Liu, Guodong
2014-01-01
To improve the performance of infrared face recognition for time-lapse data, a new construction of blood perfusion is proposed based on bio-heat transfer. First, by quantifying blood perfusion based on the Pennes equation, the thermal information is converted into a blood perfusion rate, which is a stable biological feature of the face image. Then, a separability discriminant criterion in the Discrete Cosine Transform (DCT) domain is applied to extract the discriminative features of the blood perfusion information. Experimental results demonstrate that the blood perfusion features are more concentrated and discriminative for recognition than the raw thermal information. Infrared face recognition based on the proposed blood perfusion representation is robust and achieves better recognition performance than other state-of-the-art approaches.
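Feature extraction in the DCT domain, keeping a block of low-order coefficients of the blood-perfusion image, can be sketched as follows. The paper's separability discriminant criterion for coefficient selection is replaced here by a simple top-left block, which is an assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def dct2(image):
    """2-D DCT via 1-D transforms of rows and columns."""
    h, w = image.shape
    return dct_matrix(h) @ image @ dct_matrix(w).T

def dct_features(perfusion_map, k=4):
    """Top-left k x k block of DCT coefficients as a compact feature vector."""
    return dct2(perfusion_map)[:k, :k].ravel()

flat = np.ones((16, 16))           # a uniform "perfusion map"
feats = dct_features(flat)
print(feats.shape)                 # (16,)
print(round(float(feats[0]), 3))   # 16.0: DC term = mean * sqrt(h*w)
```

These compact vectors would then feed whatever classifier the recognition stage uses.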
Zimmermann, Friederike G S; Eimer, Martin
2013-06-01
Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. Copyright © 2013 Elsevier Ltd. All rights reserved.
Identifying and detecting facial expressions of emotion in peripheral vision.
Smith, Fraser W; Rossit, Stephanie
2018-01-01
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also a well detected expression. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.
Identifying and detecting facial expressions of emotion in peripheral vision
Rossit, Stephanie
2018-01-01
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also a well detected expression. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus. PMID:29847562
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and the processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and the processing of configural information, but not in tasks testing either holistic processing or the processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and the processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Deep and shallow encoding effects on face recognition: an ERP study.
Marzi, Tessa; Viggiano, Maria Pia
2010-12-01
Event related potentials (ERPs) were employed to investigate whether and when brain activity related to face recognition varies according to the processing level undertaken at encoding. Recognition was assessed when preceded by a "shallow" (orientation judgement) or by a "deep" study task (occupation judgement). Moreover, we included a further manipulation by presenting at encoding faces either in the upright or inverted orientation. As expected, deeply encoded faces were recognized more accurately and more quickly with respect to shallowly encoded faces. The ERP showed three main findings: i) as witnessed by more positive-going potentials for deeply encoded faces, at early and later processing stage, face recognition was influenced by the processing strategy adopted during encoding; ii) structural encoding, indexed by the N170, turned out to be "cognitively penetrable" showing repetition priming effects for deeply encoded faces; iii) face inversion, by disrupting configural processing during encoding, influenced memory related processes for deeply encoded faces and impaired the recognition of faces shallowly processed. The present study adds weight to the concept that the depth of processing during memory encoding affects retrieval. We found that successful retrieval following deep encoding involved both familiarity- and recollection-related processes showing from 500 ms a fronto-parietal distribution, whereas shallow encoding affected only earlier processing stages reflecting perceptual priming. Copyright © 2010 Elsevier B.V. All rights reserved.
Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-04-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.
Face Recognition Is Shaped by the Use of Sign Language
ERIC Educational Resources Information Center
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
2018-01-01
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
Eye tracking reveals a crucial role for facial motion in recognition of faces by infants
Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-01-01
Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387
Error Rates in Users of Automatic Face Recognition Software
White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants with that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631
Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition
Chakraborty, Anya; Chakrabarti, Bhismadev
2018-01-01
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test whether the visual processing of the highly familiar self-face differs from that of other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for the self-face compared with other faces. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner. PMID:29487554
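The "steeper slope of the psychometric response curve" used above as an index of self-face distinctiveness can be illustrated with a minimal sketch. This is not the authors' analysis code; the morph levels, response proportions, and the linearized logistic fit are all illustrative assumptions. The idea: plot the proportion of "self" responses against the percentage of self in each morph, fit a logistic function, and read off its slope — an observer with a sharper self/other category boundary yields a steeper fitted slope.

```python
import math

def logit(p):
    """Log-odds of a proportion, clamped away from 0 and 1 to avoid infinities."""
    p = min(max(p, 1e-3), 1 - 1e-3)
    return math.log(p / (1.0 - p))

def fit_psychometric(levels, p_self):
    """Linearized logistic fit: logit(p) = slope * (x - midpoint).

    Regressing logit-transformed response proportions on morph level gives
    the slope directly; the midpoint is where p crosses 0.5. A steeper slope
    indicates a more distinct (sharper) self-face representation.
    """
    ys = [logit(p) for p in p_self]
    n = len(levels)
    mx = sum(levels) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(levels, ys))
    var = sum((x - mx) ** 2 for x in levels)
    slope = cov / var
    midpoint = mx - my / slope
    return midpoint, slope

# Hypothetical data: % self in the morph, and proportion of "self" responses
levels  = [0, 20, 40, 50, 60, 80, 100]
sharp   = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98]  # distinct representation
shallow = [0.20, 0.30, 0.42, 0.50, 0.58, 0.70, 0.80]  # less distinct

_, s_sharp = fit_psychometric(levels, sharp)
_, s_shallow = fit_psychometric(levels, shallow)
print(s_sharp > s_shallow)  # the sharper observer yields the steeper slope
```

In practice a maximum-likelihood fit (e.g. logistic regression on trial-level responses) would replace the simple least-squares fit on logit-transformed proportions, but the slope parameter plays the same interpretive role.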