Sample records for visual face recognition

  1. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    PubMed

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.

  2. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning a wide range of quality, from no perceived impairment to strong perceived impairment, for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
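As an illustration of two of the distortion types the record names (additive white noise and contrast change), here is a minimal NumPy sketch. The function names, noise levels, and image size are illustrative, not the QLFW generation protocol.

```python
import numpy as np

def add_white_noise(img, sigma, rng=None):
    """Additive white Gaussian noise (one QLFW distortion type)."""
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def change_contrast(img, factor):
    """Scale contrast about the mean intensity (contrast-change distortion)."""
    mean = img.mean()
    out = mean + factor * (img.astype(float) - mean)
    return np.clip(out, 0, 255).astype(np.uint8)

# demo: apply increasing noise severities to a dummy grayscale "face"
face = np.full((64, 64), 128, dtype=np.uint8)
distorted = [add_white_noise(face, s) for s in (5, 15, 30)]
low_contrast = change_contrast(face, 0.5)
```

JPEG/JPEG2000 compression and Gaussian blur, the other distortion types listed, would typically be applied with an imaging library rather than raw NumPy.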

  3. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  4. Comparing the visual spans for faces and letters

    PubMed Central

    He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.

    2015-01-01

    The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858

  5. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone-mapping methods for appropriate visualization on conventional, inexpensive LDR displays. Different tone-mapping methods can produce completely different visualizations, raising several concerns about privacy intrusion: some methods allow perceptual recognition of the individuals, while others do not reveal any identity. Even where perceptual recognition is possible, a natural question is how computer-based recognition will perform on tone-mapped images. This paper presents a study in which automatic face recognition using sparse representation is tested with images that result from common tone-mapping operators applied to HDR images, and its ability to recognize face identity is described. Furthermore, typical LDR images are used for face recognition training.
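The record does not say which tone-mapping operators were tested; as one widely used example of a global operator, here is a sketch of the Reinhard global tone map, which compresses HDR luminance into the displayable [0, 1) range. The `key` default and epsilon are conventional choices, not taken from the paper.

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Reinhard et al. global tone-mapping operator (illustrative example
    of the kind of operator applied to HDR images before LDR display)."""
    lum = np.asarray(luminance, dtype=float)
    # log-average ("key") luminance of the scene; epsilon avoids log(0)
    log_avg = np.exp(np.mean(np.log(1e-6 + lum)))
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)   # compress to [0, 1)

# demo: four pixels spanning several orders of magnitude of luminance
hdr = np.array([[0.1, 1.0], [10.0, 100.0]])
ldr = reinhard_global(hdr)
```

Because the mapping is monotonic but compressive, bright regions lose contrast relative to dark ones, which is exactly the kind of alteration that can affect face recognition performance.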

  6. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  8. Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.

    PubMed

    Brown, Charity; Lloyd-Jones, Toby J

    2006-03-01

We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.

  9. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
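The record reports discrimination thresholds but does not state the adaptive procedure used to estimate them. As an illustration only, here is a minimal 1-up-2-down staircase, a standard psychophysical method that converges on roughly the 70.7%-correct signal level; the function, its parameters, and the simulated observer are all hypothetical.

```python
def staircase_1up_2down(respond, start, step, n_reversals=8):
    """Estimate a threshold with a 1-up-2-down staircase.

    respond(level) -> bool simulates one trial (True = correct).
    The level falls after two consecutive correct trials and rises
    after each error; the threshold estimate is the mean of the
    last six reversal levels."""
    level, correct_run, reversals = start, 0, []
    direction = -1  # -1 while descending, +1 while ascending
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:
                correct_run = 0
                if direction == +1:          # peak -> reversal
                    reversals.append(level)
                direction = -1
                level = max(step, level - step)
        else:
            correct_run = 0
            if direction == -1:              # trough -> reversal
                reversals.append(level)
            direction = +1
            level += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# demo: deterministic observer who is correct whenever level >= 10
estimate = staircase_1up_2down(lambda lv: lv >= 10, start=20, step=2)
```

With this deterministic observer the staircase oscillates between 8 and 10, so the estimate lands between the two, near the true step threshold of 10.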

  10. Visual body recognition in a prosopagnosic patient.

    PubMed

    Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M

    2012-01-01

    Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
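The random-effects pooling of r effect sizes that the record describes is conventionally done on Fisher-z transformed correlations with the DerSimonian-Laird estimator of between-study variance. The sketch below shows that standard recipe; it is not the authors' exact analysis, and the input values are made up.

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation (variance-stabilizing)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def random_effects_pooled_r(rs, ns):
    """Pool correlation effect sizes with a DerSimonian-Laird
    random-effects model. rs: per-study r values; ns: sample sizes.
    Returns the pooled correlation (back-transformed from z)."""
    zs = [fisher_z(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]      # sampling variance of Fisher z
    ws = [1.0 / v for v in vs]            # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)
    # re-weight with tau^2 added to each study's variance
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                # back-transform z -> r

# demo with made-up study results
pooled = random_effects_pooled_r([0.3, 0.5], [100, 100])
```

When all studies report the same r, tau^2 is zero and the pooled estimate reduces to that r; heterogeneous studies pull the estimate toward an inverse-variance-weighted compromise.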

  12. Karen and George: Face Recognition by Visually Impaired Children.

    ERIC Educational Resources Information Center

    Ellis, Hadyn D.; And Others

    1988-01-01

Two visually impaired children, aged 8 and 10, appeared to have severe difficulty in recognizing faces. After assessment, it became apparent that only one had unusually poor facial recognition skills. After training, which included matching face photographs, schematic faces, and digitized faces, there was no evidence of any improvement…

  13. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.

  14. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  15. Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?

    PubMed

    Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni

    2015-09-01

The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. 20 adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.

  16. Visual scanning behavior is related to recognition performance for own- and other-age faces

    PubMed Central

    Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela

    2015-01-01

It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated to discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056

  17. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  18. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  19. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  20. Neural microgenesis of personally familiar face recognition.

    PubMed

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-09-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.

  1. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages, similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role for pre-saccadic eye movement signals in human face recognition.

  2. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.

    PubMed

    Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina

    2014-12-01

In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition. Copyright © 2018. Published by Elsevier Inc.

  4. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice is to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine whether there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device.

    PubMed

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-09-01

    (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. 30 subjects with AMD (age range 66-90 years; visual acuity 0.4-1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = -0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = -0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. 
A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance.

  6. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device

    PubMed Central

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-01-01

    Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. Methods: 30 subjects with AMD (age range 66–90 years; visual acuity 0.4–1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Results: Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = −0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = −0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Conclusion: Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. 
A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance. PMID:12185131

  7. Model-Driven Study of Visual Memory

    DTIC Science & Technology

    2004-12-01

    …dimensional stimuli (synthetic human faces) afford important insights into episodic recognition memory. The results were well accommodated by a summed… the unusual properties of the z-transformed ROCs. Subject terms: memory, visual memory, computational model, human memory, faces, identity.

  8. Character displacement of Cercopithecini primate visual signals

    PubMed Central

    Allen, William L.; Stevens, Martin; Higham, James P.

    2014-01-01

    Animal visual signals have the potential to act as an isolating barrier to prevent interbreeding of populations through a role in species recognition. Within communities of competing species, species recognition signals are predicted to undergo character displacement, becoming more visually distinctive from each other, however this pattern has rarely been identified. Using computational face recognition algorithms to model primate face processing, we demonstrate that the face patterns of guenons (tribe: Cercopithecini) have evolved under selection to become more visually distinctive from those of other guenon species with whom they are sympatric. The relationship between the appearances of sympatric species suggests that distinguishing conspecifics from other guenon species has been a major driver of diversification in guenon face appearance. Visual signals that have undergone character displacement may have had an important role in the tribe’s radiation, keeping populations that became geographically separated reproductively isolated on secondary contact. PMID:24967517
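
    The guenon study models primate face processing with computational face-recognition algorithms. As a hedged illustration of the general idea only (not the authors' actual pipeline), the sketch below builds an eigenface-style face space with PCA and scores visual distinctiveness as the distance between two groups' mean faces in that space; the function names and toy data are illustrative.

```python
import numpy as np

def face_space(images, n_components=5):
    """Build an eigenface-style 'face space': PCA on flattened images."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # Principal components via SVD of the centred data matrix
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]              # (n_components, n_pixels)
    return mean, basis

def project(images, mean, basis):
    """Coordinates of each face in the low-dimensional face space."""
    X = images.reshape(len(images), -1).astype(float)
    return (X - mean) @ basis.T

def distinctiveness(coords_a, coords_b):
    """Distance between two groups' mean faces in face space."""
    return np.linalg.norm(coords_a.mean(axis=0) - coords_b.mean(axis=0))

# Toy example: two synthetic "species" of 8x8 face patterns
rng = np.random.default_rng(0)
species_a = rng.normal(0.0, 1.0, (20, 8, 8))
species_b = rng.normal(0.5, 1.0, (20, 8, 8))   # shifted mean appearance

mean, basis = face_space(np.concatenate([species_a, species_b]))
d = distinctiveness(project(species_a, mean, basis),
                    project(species_b, mean, basis))
```

    Under character displacement, this distance would be larger for sympatric species pairs than for allopatric ones.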

  9. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  10. Repetition priming of face recognition in a serial choice reaction-time task.

    PubMed

    Roberts, T; Bruce, V

    1989-05-01

    Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.

  11. Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition

    PubMed Central

    Chakraborty, Anya; Chakrabarti, Bhismadev

    2018-01-01

    We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test whether the visual processing of the highly familiar self-face differs from that of other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing. The study did not find any self-face-specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner. PMID:29487554
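
    The distinctness of the self-face representation above is indexed by the slope of a psychometric curve fitted to responses across self-other morphs. A minimal sketch of such a fit, assuming a logistic psychometric function and SciPy's `curve_fit` (the function names and toy data are illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Logistic psychometric function: P('self' response) at morph level x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def psychometric_slope(morph_levels, p_self):
    """Fit the curve and return the slope parameter k (steepness)."""
    (x0, k), _ = curve_fit(logistic, morph_levels, p_self,
                           p0=[0.5, 5.0], maxfev=10000)
    return k

# Toy data: proportion of 'self' responses across 0 (other) .. 1 (self) morphs
levels = np.linspace(0.0, 1.0, 11)
sharp = logistic(levels, 0.5, 12.0)     # distinct self-representation
shallow = logistic(levels, 0.5, 4.0)    # less distinct representation

k_sharp = psychometric_slope(levels, sharp)
k_shallow = psychometric_slope(levels, shallow)
```

    A steeper recovered slope (larger k) corresponds to a more categorical, distinct self-face representation.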

  12. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    In spontaneous face-to-face communication, emotion expressions are sometimes mixed with utterance-related facial movements, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences on the visual parameters used in audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Utterance reduction is then performed by computing the residual between the observed visual parameters and the predicted utterance-related visual parameters. To generate these predictions, the article introduces a Fused Hidden Markov Model inversion method trained on a neutrally expressed audio-visual corpus. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
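
    The core idea above is to predict the utterance-driven visual parameters from audio and keep the residual as the emotion-bearing signal, with the inversion model simplified to a GMM mapping. A hedged sketch of that idea using generic joint-GMM regression (a standard stand-in, not the authors' Fused HMM inversion; the data and names are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(audio, visual, n_components=2, seed=0):
    """Fit a GMM on joint [audio, visual] features from a neutral corpus."""
    joint = np.hstack([audio, visual])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full', random_state=seed)
    gmm.fit(joint)
    return gmm

def predict_visual(gmm, audio, d_a):
    """GMM regression: E[visual | audio] via per-component conditionals."""
    preds = np.zeros((len(audio), gmm.means_.shape[1] - d_a))
    # Responsibilities of each component given the audio marginal
    logp = np.stack([
        multivariate_normal.logpdf(audio, gmm.means_[k, :d_a],
                                   gmm.covariances_[k, :d_a, :d_a])
        for k in range(gmm.n_components)], axis=1)
    logp += np.log(gmm.weights_)
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    for k in range(gmm.n_components):
        mu_a, mu_v = gmm.means_[k, :d_a], gmm.means_[k, d_a:]
        C = gmm.covariances_[k]
        gain = C[d_a:, :d_a] @ np.linalg.inv(C[:d_a, :d_a])
        preds += resp[:, [k]] * (mu_v + (audio - mu_a) @ gain.T)
    return preds

# Toy "neutral" corpus: visual parameters driven mostly by the audio
rng = np.random.default_rng(1)
audio = rng.normal(size=(200, 3))
visual = audio @ rng.normal(size=(3, 2)) + 0.01 * rng.normal(size=(200, 2))

gmm = fit_joint_gmm(audio, visual)
# Emotion-bearing residual: observed visual minus utterance-driven prediction
residual = visual - predict_visual(gmm, audio, d_a=3)
```

    On new expressive data, this residual would carry the variation the utterance model cannot explain, i.e., the candidate emotion signal.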

  13. Recognition-induced forgetting of faces in visual long-term memory.

    PubMed

    Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M

    2017-10-01

    Despite more than a century of evidence that long-term memory for pictures and words differs, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of other objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise. As a result, faces might be expected to be immune to recognition-induced forgetting. Despite excellent memory for such stimuli, however, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting and represent objects of expertise, as well as consequences for eyewitness testimony and the justice system.

  14. The effect of inversion on face recognition in adults with autism spectrum disorder.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-05-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age- and IQ-matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than to the mouth region, and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.

  15. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  16. Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized

    PubMed Central

    Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.

    2012-01-01

    Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051

  17. Qualitatively similar processing for own- and other-race faces: Evidence from efficiency and equivalent input noise.

    PubMed

    Shafai, Fakhri; Oruc, Ipek

    2018-02-01

    The other-race effect is the finding of diminished performance in recognition of other-race faces compared to those of own-race. It has been suggested that the other-race effect stems from specialized expert processes being tuned exclusively to own-race faces. In the present study, we measured recognition contrast thresholds for own- and other-race faces as well as houses for Caucasian observers. We have factored face recognition performance into two invariant aspects of visual function: efficiency, which is related to the neural computations and processing demanded by the task, and equivalent input noise, related to signal degradation within the visual system. We hypothesized that if expert processes are available only to own-race faces, this should translate into substantially greater recognition efficiencies for own-race compared to other-race faces. Instead, we found similar recognition efficiencies for both own- and other-race faces. The other-race effect manifested as increased equivalent input noise. These results argue against qualitatively distinct perceptual processes. Instead, they suggest that for Caucasian observers, similar neural computations underlie recognition of own- and other-race faces. Copyright © 2018 Elsevier Ltd. All rights reserved.
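
    Factoring thresholds into efficiency and equivalent input noise follows the standard linear-amplifier (equivalent-noise) model, in which threshold contrast energy grows linearly with external noise. A minimal sketch under that assumption (toy numbers, not the study's data):

```python
import numpy as np

def equivalent_input_noise(noise_levels, threshold_energy):
    """Linear-amplifier model: E_th = k * (N_ext + N_eq).
    Fit a line to threshold contrast energy vs. external noise spectral
    density; intercept/slope recovers the equivalent input noise N_eq."""
    k, c = np.polyfit(noise_levels, threshold_energy, 1)
    return c / k

def efficiency(ideal_threshold_energy, human_threshold_energy):
    """Efficiency: ratio of the contrast energy an ideal observer needs
    to the energy the human observer needs for the same performance."""
    return ideal_threshold_energy / human_threshold_energy

# Toy data exactly consistent with the model: k = 2.0, N_eq = 0.5
noise = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
energy = 2.0 * (noise + 0.5)

n_eq = equivalent_input_noise(noise, energy)   # recovers N_eq ≈ 0.5
eff = efficiency(ideal_threshold_energy=0.2, human_threshold_energy=1.0)
```

    In this framework, the reported other-race effect would appear as a larger fitted N_eq for other-race faces while the efficiency term stays roughly constant.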

  18. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and the face, and previous work suggests that presentation modality may affect emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio only and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  19. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. In infancy the timing of emergence of the other-race effect is dependent on face gender.

    PubMed

    Tham, Diana Su Yun; Bremner, J Gavin; Hay, Dennis

    2015-08-01

    Poorer recognition of other-race faces relative to own-race faces is well documented from late infancy to adulthood. Research has revealed an increase in the other-race effect (ORE) during the first year of life, but there is some disagreement regarding the age at which it emerges. Using cropped faces to eliminate discrimination based on external features, visual paired comparison and spontaneous visual preference measures were used to investigate the relationship between ORE and face gender at 3-4 and 8-9 months. Caucasian-White 3- to 4-month-olds' discrimination of Chinese, Malay, and Caucasian-White faces showed an own-race advantage for female faces alone, whereas at 8-9 months the own-race advantage was general across gender. This developmental effect is accompanied by a preference for female over male faces at 4 months and no gender preference at 9 months. The pattern of recognition advantage and preference suggests that there is a shift from a female-based own-race recognition advantage to a general own-race recognition advantage, in keeping with a visual and social experience-based account of ORE. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Looking for myself: current multisensory input alters self-face recognition.

    PubMed

    Tsakiris, Manos

    2008-01-01

    How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel the touch myself? Studies of face-recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they were looking at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time is a source of rich multisensory experiences used to maintain or update self-representations.

  2. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    PubMed

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found a robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  4. Outlining face processing skills of portrait artists: Perceptual experience with faces predicts performance.

    PubMed

    Devue, Christel; Barsics, Catherine

    2016-10-01

    Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror perceptual and visual short term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome

    ERIC Educational Resources Information Center

    Shaw, Tracey A.; Porter, Melanie A.

    2013-01-01

    This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…

  6. Brief daily exposures to Asian females reverses perceptual narrowing for Asian faces in Caucasian infants

    PubMed Central

    Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang

    2012-01-01

    Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins in infancy. The present study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants’ visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate between novel and familiarized Asian faces at the beginning of testing were given brief daily experience with Asian female faces in the experimental condition and Caucasian female faces in the control condition. At the end of three weeks, only infants who received daily experience with Asian females showed above-chance recognition of novel Asian female and male faces. Further, infants in the experimental condition showed greater efficiency in learning novel Asian females compared to infants in the control condition. Thus, visual experience with a novel stimulus category can reverse the effects of perceptual narrowing in infancy via improved stimulus recognition and encoding. PMID:22625845

  7. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  8. Face-specific and domain-general visual processing deficits in children with developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad

    2017-02-01

    Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12-years-old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.

  9. Facial recognition using enhanced pixelized image for simulated visual prosthesis.

    PubMed

    Li, Ruonan; Zhang, Xudong; Zhang, Hui; Hu, Guanshu

    2005-01-01

    A simulated face recognition experiment using enhanced pixelized images is designed and performed for the artificial visual prosthesis. The results of the simulation reveal new characteristics of visual performance in an enhanced pixelization condition, and then new suggestions on the future design of visual prosthesis are provided.

  10. The neural basis of body form and body action agnosia.

    PubMed

    Moro, Valentina; Urgesi, Cosimo; Pernigo, Simone; Lanteri, Paola; Pazzaglia, Mariella; Aglioti, Salvatore Maria

    2008-10-23

    Visual analysis of faces and nonfacial body stimuli brings about neural activity in different cortical areas. Moreover, processing body form and body action relies on distinct neural substrates. Although brain lesion studies show specific face processing deficits, neuropsychological evidence for defective recognition of nonfacial body parts is lacking. By combining psychophysics studies with lesion-mapping techniques, we found that lesions of ventromedial, occipitotemporal areas induce face and body recognition deficits while lesions involving extrastriate body area seem causatively associated with impaired recognition of body but not of face and object stimuli. We also found that body form and body action recognition deficits can be double dissociated and are causatively associated with lesions to extrastriate body area and ventral premotor cortex, respectively. Our study reports two category-specific visual deficits, called body form and body action agnosia, and highlights their neural underpinnings.

  11. Acquired prosopagnosia without word recognition deficits.

    PubMed

    Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley

    2015-01-01

    It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level, such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere, while one had bilateral lesions that were more pronounced in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks, totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performance of the younger prosopagnosic (male, 31 years) was compared to that of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT.
They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
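    The single-patient analyses above rely on Crawford's t-test, which compares one patient's score against a small control sample while treating the controls as a sample rather than a population. As an illustrative sketch only (the function name is ours, not the authors' code), Crawford and Howell's modified t statistic can be written as:

    ```python
    import math

    def crawford_t(patient_score, control_scores):
        """Crawford & Howell's modified t-test: compares a single patient's
        score to a small control sample, returning (t, degrees of freedom)."""
        n = len(control_scores)
        mean = sum(control_scores) / n
        # Sample standard deviation of the controls (n - 1 denominator)
        sd = math.sqrt(sum((x - mean) ** 2 for x in control_scores) / (n - 1))
        # The sqrt((n + 1) / n) factor widens the error term to account for
        # uncertainty in the control mean, unlike a plain z-score
        t = (patient_score - mean) / (sd * math.sqrt((n + 1) / n))
        return t, n - 1
    ```

    The resulting t is referred to a t distribution with n - 1 degrees of freedom; with only a dozen controls, as in this study, this is noticeably more conservative than treating the control mean and SD as population values.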

  12. Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.

    PubMed

    Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno

    2004-01-01

    Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that are nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition. Here we report a detailed investigation of a patient, NS, with bilateral occipito-temporal lesions who is strongly impaired at object and face recognition. NS presents normal drawing copy and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performance in high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.

  13. [Symptoms and lesion localization in visual agnosia].

    PubMed

    Suzuki, Kyoko

    2004-11-01

    There are two cortical visual processing streams: the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream can result in impairment of visual recognition, so systematic investigation is needed to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, differ from the others in that patients can recognize a face as a face and buildings as buildings, but cannot identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition has been confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Larger lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, in agreement with neuroimaging studies that reveal activation of bilateral occipito-temporal regions during object recognition tasks.

  14. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.

    PubMed

    Robotham, Ro J; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.

  16. Global precedence effects account for individual differences in both face and object recognition performance.

    PubMed

    Gerlach, Christian; Starrfelt, Randi

    2018-03-20

    There has been an increase in studies adopting an individual difference approach to examine visual cognition and in particular in studies trying to relate face recognition performance with measures of holistic processing (the face composite effect and the part-whole effect). In the present study we examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition is related to the magnitude of the global precedence effect.

  17. Enhanced Visual Short-Term Memory for Angry Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Wu, Chia-Yun; Linden, David E. J.; Raymond, Jane E.

    2009-01-01

    Although some views of face perception posit independent processing of face identity and expression, recent studies suggest interactive processing of these 2 domains. The authors examined expression-identity interactions in visual short-term memory (VSTM) by assessing recognition performance in a VSTM task in which face identity was relevant and…

  18. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    PubMed Central

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378

  19. The effect of non-neovascular age-related macular degeneration on face recognition performance.

    PubMed

    Taylor, Deanna J; Smith, Nicholas D; Binns, Alison M; Crabb, David P

    2018-04-01

    There is a well-established research base surrounding face recognition in patients with age-related macular degeneration (AMD). However, much of this existing research does not differentiate between results obtained for 'wet' AMD and 'dry' AMD. Here, we test the hypothesis that face recognition performance is worse in patients with dry AMD compared with visually healthy peers. Patients (>60 years of age, logMAR binocular visual acuity 0.7 or better) with dry AMD of varying severity and visually healthy age-related peers (controls) completed a modified version of the Cambridge Face Memory Test (CFMT). Percentage of correctly identified faces was used as an outcome measure for performance for each participant. A 90% normative reference limit was generated from the distribution of CFMT scores recorded in the visually healthy controls. Scores for AMD participants were then specifically compared to this limit, and comparisons between average scores in the AMD severity groups were investigated. Thirty patients (median [interquartile range] age of 76 [70, 79] years) and 34 controls (median age of 70 [64, 75] years) were examined. Four, seventeen and nine patients were classified as having early, intermediate and late AMD (geographic atrophy) respectively. Five (17%) patients recorded a face recognition performance worse than the 90% limit (Fisher's exact test, p = 0.46) set by controls; four of these had geographic atrophy. Patients with geographic atrophy identified fewer faces on average (mean ± SD: 61% ± 22%) than those with early and intermediate AMD (75% ± 11%) and controls (74% ± 11%). People with dry AMD may not suffer from problems with face recognition until the disease is in its later stages; those with late AMD (geographic atrophy) are likely to have difficulty recognising faces. The results from this study should influence the management and expectations of patients with dry AMD in both community practice and hospital clinics.
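    The comparison of patient counts against the 90% normative limit uses Fisher's exact test on a 2x2 table. As a hedged, stdlib-only illustration (the function name is ours, not the study's code), a two-sided p-value can be computed directly from the hypergeometric distribution:

    ```python
    from math import comb

    def fisher_exact_p(a, b, c, d):
        """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
        sums the probabilities of all tables with the same margins that are
        no more likely than the observed table."""
        row1, row2 = a + b, c + d
        col1 = a + c
        n = row1 + row2

        def p_table(x):  # hypergeometric probability of x in the top-left cell
            return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

        p_obs = p_table(a)
        lo, hi = max(0, col1 - row2), min(col1, row1)
        # Small tolerance guards against float ties being missed
        return sum(p_table(x) for x in range(lo, hi + 1)
                   if p_table(x) <= p_obs * (1 + 1e-9))
    ```

    For the study's design, the table would cross group membership (AMD vs. control) against falling below vs. above the normative limit; the enumeration approach above is exact for the small counts involved.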

  20. Spatial location in brief, free-viewing face encoding modulates contextual face recognition

    PubMed Central

    Felisberti, Fatima M.; McDermott, Mark R.

    2013-01-01

    The effect of the spatial location of faces in the visual field during brief, free-viewing encoding in subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than cheaters, and it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d′ and faster reaction time (RT). The d′ for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space. PMID:24349694
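    The d′ reported above is the standard signal-detection sensitivity index: the distance between the z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear correction shown is one common convention for avoiding infinite z-scores at rates of 0 or 1, not necessarily the one used in this study:

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
        with a log-linear (add 0.5) correction applied to both rates."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)
    ```

    Unlike raw percent correct, d′ separates sensitivity from response bias, which is why it is paired with reaction time as a recognition measure here.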

  1. Faces in Context: Does Face Perception Depend on the Orientation of the Visual Scene?

    PubMed

    Taubert, Jessica; van Golde, Celine; Verstraten, Frans A J

    2016-10-01

    The mechanisms held responsible for familiar face recognition are thought to be orientation dependent; inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same-different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply different mechanisms operate on upright and inverted faces. © The Author(s) 2016.

  2. Local visual perception bias in children with high-functioning autism spectrum disorders; do we have the whole picture?

    PubMed

    Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn

    2016-01-01

    Local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures; however, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test, and an emotion recognition test were investigated within 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.

  3. Is it me? Self-recognition bias across sensory modalities and its relationship to autistic traits.

    PubMed

    Chakraborty, Anya; Chakrabarti, Bhismadev

    2015-01-01

    Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and it is therefore essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits. Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant's voice was recorded and face photographed; these were then morphed, respectively, with voices and faces from unfamiliar identities. In the second session, participants performed a 'self-identification' task, classifying each morph as a 'self' voice (or face) or an 'other' voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. These two self-recognition metrics were tested for association with each other and with autistic traits. The fifty percent 'self' response was reached at a higher percentage of self in the auditory domain than in the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = -0.165, P = 0.204). A higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ AQ = -0.020, P = 0.438). Our data show that recognition bias for physical self-representation is not related across sensory modalities.
Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. A narrow self-other overlap in the auditory domain seen in individuals with high autistic traits could arise due to enhanced perceptual processing of auditory stimuli often observed in individuals with autism.

  4. The Other-Race Effect Develops During Infancy

    PubMed Central

    Quinn, Paul C.; Slater, Alan M.; Lee, Kang; Ge, Liezhong; Pascalis, Olivier

    2008-01-01

    Experience plays a crucial role in the development of face processing. In the study reported here, we investigated how faces observed within the visual environment affect the development of the face-processing system during the 1st year of life. We assessed 3-, 6-, and 9-month-old Caucasian infants' ability to discriminate faces within their own racial group and within three other-race groups (African, Middle Eastern, and Chinese). The 3-month-old infants demonstrated recognition in all conditions, the 6-month-old infants were able to recognize Caucasian and Chinese faces only, and the 9-month-old infants' recognition was restricted to own-race faces. The pattern of preferences indicates that the other-race effect is emerging by 6 months of age and is present at 9 months of age. The findings suggest that facial input from the infant's visual environment is crucial for shaping the face-processing system early in infancy, resulting in differential recognition accuracy for faces of different races in adulthood. PMID:18031416

  5. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  6. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of emotional facial expressions not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

    Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and their relevance for other sensory modalities that rely on serial encoding.

  8. The neural correlates of visual self-recognition.

    PubMed

    Devue, Christel; Brédart, Serge

    2011-03-01

    This paper reviews studies aimed at determining which brain regions are recruited during visual self-recognition, with a particular focus on self-face recognition. A complex bilateral network, involving frontal, parietal and occipital areas, appears to be associated with self-face recognition, with a particularly strong involvement of the right hemisphere. Results indicate that it remains difficult to determine which specific cognitive operation is reflected by each recruited brain area, in part because of the variability of the control stimuli and experimental tasks used across studies. A synthesis of the interpretations provided by previous studies is presented. The relevance of using self-recognition as an indicator of self-awareness is discussed. We argue that a major aim of future research in the field should be to identify more clearly the cognitive operations induced by the perception of the self-face, and to search for dissociations between neural correlates and cognitive components. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Reduced adaptability, but no fundamental disruption, of norm-based face coding following early visual deprivation from congenital cataracts.

    PubMed

    Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne

    2017-05-01

    Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.

  10. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    PubMed Central

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but also by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role in face inversion effects and in the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision.
When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606
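    The fMRI result in this record rests on multivoxel pattern analysis: a classifier is asked to separate the activity patterns evoked by eyes versus mouths. A minimal sketch with synthetic data follows; the voxel count, noise model, and the nearest-centroid classifier are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

# Synthetic "fMRI" patterns: eye and mouth trials drawn around two
# different mean voxel patterns (purely illustrative, not real data).
eye_mean = rng.normal(0.0, 1.0, n_voxels)
mouth_mean = rng.normal(0.0, 1.0, n_voxels)
X = np.vstack([rng.normal(eye_mean, 1.0, (n_trials, n_voxels)),
               rng.normal(mouth_mean, 1.0, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Leave-one-trial-out nearest-centroid decoding, a simple MVPA scheme.
correct = 0
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    c0 = X[keep & (y == 0)].mean(axis=0)
    c1 = X[keep & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]

accuracy = correct / len(y)
print(accuracy > 0.5)  # separable patterns decode above chance
```

    In the study's logic, decoding accuracy computed this way would be compared between typical and reversed visual field locations of the features.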

  11. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  12. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to achieve robust operation across variations in viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance on face recognition.

  13. Role of fusiform and anterior temporal cortical areas in facial recognition.

    PubMed

    Nasr, Shahin; Tootell, Roger B H

    2012-11-15

    Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast reversed faces. However, response accuracy was better correlated to recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  16. How a Hat May Affect 3-Month-Olds' Recognition of a Face: An Eye-Tracking Study

    PubMed Central

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment. PMID:24349378

  18. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  19. Using hypnosis to disrupt face processing: mirrored-self misidentification delusion and different visual media

    PubMed Central

    Connors, Michael H.; Barnier, Amanda J.; Coltheart, Max; Langdon, Robyn; Cox, Rochelle E.; Rivolta, Davide; Halligan, Peter W.

    2014-01-01

    Mirrored-self misidentification delusion is the belief that one’s reflection in the mirror is not oneself. This experiment used hypnotic suggestion to impair normal face processing in healthy participants and recreate key aspects of the delusion in the laboratory. From a pool of 439 participants, 22 high hypnotisable participants (“highs”) and 20 low hypnotisable participants were selected on the basis of their extreme scores on two separately administered measures of hypnotisability. These participants received a hypnotic induction and a suggestion for either impaired (i) self-face recognition or (ii) impaired recognition of all faces. Participants were tested on their ability to recognize themselves in a mirror and other visual media – including a photograph, live video, and handheld mirror – and their ability to recognize other people, including the experimenter and famous faces. Both suggestions produced impaired self-face recognition and recreated key aspects of the delusion in highs. However, only the suggestion for impaired other-face recognition disrupted recognition of other faces, albeit in a minority of highs. The findings confirm that hypnotic suggestion can disrupt face processing and recreate features of mirrored-self misidentification. The variability seen in participants’ responses also corresponds to the heterogeneity seen in clinical patients. An important direction for future research will be to examine sources of this variability within both clinical patients and the hypnotic model. PMID:24994973

  1. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested whether oxytocin affects emotion recognition via altered eye-gaze. In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, face processing was faster across emotions under OXT, though this effect was only marginally significant (p < .06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and eye-gaze was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues that may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  2. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique, along with an information fusion procedure, for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are then projected into higher-dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome illumination variations, partial occlusions, expression variations, and temperature-induced variations that affect visual and thermal face recognition techniques. The AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure greatly improves recognition accuracy for both the visual and thermal images when compared to conventional techniques. A decision-level fusion methodology is also presented which, together with the feature selection procedure, outperforms various other face recognition techniques in terms of recognition accuracy.
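    The modular kernel eigenspace idea can be sketched roughly as follows: each image sub-region ("module") is projected through its own kernel PCA, and the per-region projections are concatenated into one feature vector. The RBF kernel, patch size, and component count below are illustrative assumptions, not the authors' parameters, and random arrays stand in for phase congruency maps.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.01):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca_fit(X, n_components=4, gamma=0.01):
    # Double-centre the kernel matrix and keep the top eigenvectors,
    # scaled so that projections have unit-variance directions.
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return alphas, gamma

def modular_features(images, patch=8, n_components=4):
    # Split each image into non-overlapping patches, run kernel PCA
    # per patch location, and concatenate the projections.
    n, h, w = images.shape
    feats = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            X = images[:, i:i+patch, j:j+patch].reshape(n, -1)
            alphas, gamma = kernel_pca_fit(X, n_components)
            feats.append(rbf_kernel(X, X, gamma) @ alphas)
    return np.hstack(feats)

rng = np.random.default_rng(0)
faces = rng.random((6, 16, 16))   # six toy 16x16 "feature map" images
F = modular_features(faces)
print(F.shape)                    # 4 patches x 4 components = 16 features
```

    In the paper's full pipeline, features like these from visual and thermal modalities would then feed a feature selection step and a decision-level fusion stage.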

  3. Efficient visual information for unfamiliar face matching despite viewpoint variations: It's not in the eyes!

    PubMed

    Royer, Jessica; Blais, Caroline; Barnabé-Lortie, Vincent; Carré, Mélissa; Leclerc, Josiane; Fiset, Daniel

    2016-06-01

    Faces are encountered in highly diverse angles in real-world settings. Despite this considerable diversity, most individuals are able to easily recognize familiar faces. The vast majority of studies in the field of face recognition have nonetheless focused almost exclusively on frontal views of faces. Indeed, a number of authors have investigated the diagnostic facial features for the recognition of frontal views of faces previously encoded in this same view. However, the nature of the information useful for identity matching when the encoded face and test face differ in viewing angle remains mostly unexplored. The present study addresses this issue using individual differences and bubbles, a method that pinpoints the facial features effectively used in a visual categorization task. Our results indicate that the use of features located in the center of the face, the lower left portion of the nose area and the center of the mouth, are significantly associated with individual efficiency to generalize a face's identity across different viewpoints. However, as faces become more familiar, the reliance on this area decreases, while the diagnosticity of the eye region increases. This suggests that a certain distinction can be made between the visual mechanisms subtending viewpoint invariance and face recognition in the case of unfamiliar face identification. Our results further support the idea that the eye area may only come into play when the face stimulus is particularly familiar to the observer. Copyright © 2016 Elsevier Ltd. All rights reserved.
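    The Bubbles technique used above reveals diagnostic facial regions by showing the stimulus only through randomly placed Gaussian apertures and relating the sampled regions to response accuracy. A minimal sketch of mask generation; the image size, aperture count, and aperture width are illustrative assumptions.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=8.0, rng=None):
    # Sum of randomly centred Gaussian apertures, clipped to [0, 1].
    rng = rng or np.random.default_rng()
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in rng.integers(0, (h, w), size=(n_bubbles, 2)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(1)
face = rng.random((64, 64))              # stand-in for a face image
mask = bubbles_mask(face.shape, rng=rng)
stimulus = face * mask                    # observer sees only the apertures
# Over many trials, correlating the masks with accuracy yields a
# "classification image" of the regions diagnostic for the task.
print(stimulus.shape)
```

    The per-participant classification images are what allow the individual-differences analysis described in the abstract.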

  4. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    PubMed

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    PubMed

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
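    An implicit-recognition index of the kind described here contrasts looking behavior toward studied (target) and unfamiliar (foil) faces. A toy computation with hypothetical dwell times; the field names and values are invented for illustration, not taken from the study.

```python
from statistics import mean

# Hypothetical per-trial dwell times (ms) on target and foil faces.
trials = [
    {"target_dwell": 1450, "foil_dwell": 1010},
    {"target_dwell": 1320, "foil_dwell": 1180},
    {"target_dwell": 1610, "foil_dwell": 950},
]

# Implicit-recognition index: mean dwell-time advantage for targets.
# A reliably nonzero difference suggests memory for the studied face
# even when the explicit old/new judgement fails.
index = mean(t["target_dwell"] - t["foil_dwell"] for t in trials)
print(index)
```

    In practice such an index would be computed per participant and compared between the ASD and control groups alongside the explicit old-new scores.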

  6. Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: an fMRI study.

    PubMed

    Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J

    2009-10-01

    Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

  7. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to the restored sensory input delivered by a cochlear implant (CI) and may limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual cross-modal take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Individual differences in the ability to recognise facial identity are associated with social anxiety.

    PubMed

    Davis, Joshua M; McKone, Elinor; Dennett, Hugh; O'Connor, Kirsty B; O'Kearney, Richard; Palermo, Romina

    2011-01-01

    Previous research has been concerned with the relationship between social anxiety and the recognition of face expression but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus to poor development of face recognition mechanisms.

  9. Individual Differences in the Ability to Recognise Facial Identity Are Associated with Social Anxiety

    PubMed Central

    Davis, Joshua M.; McKone, Elinor; Dennett, Hugh; O'Connor, Kirsty B.; O'Kearney, Richard; Palermo, Romina

    2011-01-01

    Previous research has been concerned with the relationship between social anxiety and the recognition of face expression but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus to poor development of face recognition mechanisms. PMID:22194916

  10. Effective Connectivity from Early Visual Cortex to Posterior Occipitotemporal Face Areas Supports Face Selectivity and Predicts Developmental Prosopagnosia

    PubMed Central

    Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas

    2016-01-01

    Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas, and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projections from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766

  11. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring face images requires subjects to move close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically, and most of the time labels belonging to different classes must be entered one by one. These constraints prevent practical assisting applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis is implemented, and the synthesized frontal faces increase the recognition rate, as the experimental results confirm. Second, an RGB-D camera plays a significant role in the system: both color and depth information are used to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, a neural network is trained for face recognition, with Principal Component Analysis (PCA) applied to pre-refine the input data. The system is expected to help VIP become familiar with others and to let them recognize people once the system has been sufficiently trained.
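    The final stage described above, PCA pre-refinement followed by a neural-network classifier, can be sketched as below. This is a minimal illustration on synthetic stand-in data, not the authors' system: the abstract does not specify their network architecture, training data, or RGB-D preprocessing, so scikit-learn's generic `PCA` and `MLPClassifier` are assumed here, and the image size, component count, and hidden-layer width are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for cropped face images: 3 "people", 40 samples each,
# 32x32 grayscale vectors built from a per-person mean pattern plus noise.
n_people, n_per, dim = 3, 40, 32 * 32
means = rng.normal(0.0, 1.0, size=(n_people, dim))
X = np.vstack([m + 0.5 * rng.normal(size=(n_per, dim)) for m in means])
y = np.repeat(np.arange(n_people), n_per)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA "pre-refines" the input (dimensionality reduction),
# then a small neural network classifies identities.
model = make_pipeline(
    PCA(n_components=20),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

    With real data, `X` would hold flattened face crops (ideally the synthesized frontal views) and `y` the automatically generated identity labels.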

  12. Developmental Commonalities between Object and Face Recognition in Adolescence

    PubMed Central

    Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules

    2016-01-01

    In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176

  13. Verbal overshadowing of visual memories: some things are better left unsaid.

    PubMed

    Schooler, J W; Engstler-Schooler, T Y

    1990-01-01

    It is widely believed that verbal processing generally improves memory performance. However, in a series of six experiments, verbalizing the appearance of previously seen visual stimuli impaired subsequent recognition performance. In Experiment 1, subjects viewed a videotape including a salient individual. Later, some subjects described the individual's face. Subjects who verbalized the face performed less well on a subsequent recognition test than control subjects who did not engage in memory verbalization. The results of Experiment 2 replicated those of Experiment 1 and further clarified the effect of memory verbalization by demonstrating that visualization does not impair face recognition. In Experiments 3 and 4 we explored the hypothesis that memory verbalization impairs memory for stimuli that are difficult to put into words. In Experiment 3 memory impairment followed the verbalization of a different visual stimulus: color. In Experiment 4 marginal memory improvement followed the verbalization of a verbal stimulus: a brief spoken statement. In Experiments 5 and 6 the source of verbally induced memory impairment was explored. The results of Experiment 5 suggested that the impairment does not reflect a temporary verbal set, but rather indicates relatively long-lasting memory interference. Finally, Experiment 6 demonstrated that limiting subjects' time to make recognition decisions alleviates the impairment, suggesting that memory verbalization overshadows but does not eradicate the original visual memory. This collection of results is consistent with a recoding interference hypothesis: verbalizing a visual memory may produce a verbally biased memory representation that can interfere with the application of the original visual memory.

  14. Self-face recognition shares brain regions active during proprioceptive illusion in the right inferior fronto-parietal superior longitudinal fasciculus III network.

    PubMed

    Morita, Tomoyo; Saito, Daisuke N; Ban, Midori; Shimada, Koji; Okamoto, Yuko; Kosaka, Hirotaka; Okazawa, Hidehiko; Asada, Minoru; Naito, Eiichi

    2017-04-21

    Proprioception is somatic sensation that allows us to sense and recognize position, posture, and their changes in our body parts. It pertains directly to oneself and may contribute to bodily awareness. Likewise, one's face is a symbol of oneself, so that visual self-face recognition directly contributes to the awareness of self as distinct from others. Recently, we showed that right-hemispheric dominant activity in the inferior fronto-parietal cortices, which are connected by the inferior branch of the superior longitudinal fasciculus (SLF III), is associated with proprioceptive illusion (awareness), in concert with sensorimotor activity. Herein, we tested the hypothesis that visual self-face recognition shares brain regions active during proprioceptive illusion in the right inferior fronto-parietal SLF III network. We scanned brain activity using functional magnetic resonance imaging while twenty-two right-handed healthy adults performed two tasks. One was a proprioceptive illusion task, where blindfolded participants experienced a proprioceptive illusion of right hand movement. The other was a visual self-face recognition task, where the participants judged whether an observed face was their own. We examined whether the self-face recognition and the proprioceptive illusion commonly activated the inferior fronto-parietal cortices connected by the SLF III in a right-hemispheric dominant manner. Despite the difference in sensory modality and in the body parts involved in the two tasks, both tasks activated the right inferior fronto-parietal cortices, which are likely connected by the SLF III, in a right-side dominant manner. Here we discuss possible roles for right inferior fronto-parietal activity in bodily awareness and self-awareness. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  15. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for the human face, based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt a support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
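    The PCA-then-SVM pipeline described above can be sketched roughly as follows. This is a toy reconstruction, not the paper's implementation: it substitutes synthetic stand-in vectors for the ORL images (the ORL set is 40 subjects with 10 images each; the scale here is reduced), and the component count, linear kernel, and leave-one-image-out split are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-in for ORL-style data: 5 subjects x 10 images, flattened 64x64 grayscale,
# modeled as a per-subject prototype plus within-subject noise.
n_subjects, n_imgs, dim = 5, 10, 64 * 64
prototypes = rng.normal(size=(n_subjects, dim))
X = np.vstack([p + 0.4 * rng.normal(size=(n_imgs, dim)) for p in prototypes])
y = np.repeat(np.arange(n_subjects), n_imgs)

# Hold out the first image of each subject for testing.
test_idx = np.arange(0, len(y), n_imgs)
train_idx = np.setdiff1d(np.arange(len(y)), test_idx)

# Eigenface-style pipeline: PCA extracts features, a linear SVM assigns identity.
clf = make_pipeline(PCA(n_components=15), SVC(kernel="linear"))
clf.fit(X[train_idx], y[train_idx])
pred = clf.predict(X[test_idx])  # predicted identity labels, i.e. the "visual tags"
print("predicted identities:", pred)
```

    In the paper's setting, the predicted class would then be mapped to a stored visual tag for display; that mapping is omitted here.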

  16. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    ERIC Educational Resources Information Center

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  17. Face recognition ability matures late: evidence from individual differences in young adults.

    PubMed

    Susilo, Tirta; Germine, Laura; Duchaine, Bradley

    2013-10-01

    Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  18. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstration shows the validity of the dot-probe task for visual attention studies in monkeys and proposes a novel approach to bridging the gap between human and nonhuman primate social cognition research. It suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts have shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred. PMID:25076919

  20. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts have shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred.

  1. How Fast is Famous Face Recognition?

    PubMed Central

    Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.

    2012-01-01

    The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503

  2. Functional architecture of visual emotion recognition ability: A latent variable approach.

    PubMed

    Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W

    2016-05-01

    Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes. PsycINFO Database Record (c) 2016 APA, all rights reserved.
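    The latent-variable logic behind a "supramodal emotion-general factor" can be illustrated with a toy one-factor model. This is not the authors' model (they fit a richer structure with emotion- and modality-specific factors as well); it only shows, on simulated task scores, how a single shared ability would surface as same-signed loadings across all tasks, using `sklearn.decomposition.FactorAnalysis` as an assumed generic implementation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500
g = rng.normal(size=n)  # hypothetical supramodal general ability per participant
# Six hypothetical task scores (e.g., three face tasks, three body tasks),
# each driven by the general factor plus task-specific noise.
tasks = np.column_stack([g + 0.6 * rng.normal(size=n) for _ in range(6)])

fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(tasks)
loadings = fa.components_[0]  # one loading per task on the single latent factor
print("loadings:", np.round(loadings, 2))
```

    If the tasks really share a general factor, all loadings come out with the same sign and substantial magnitude; factor sign itself is arbitrary, which is why only the shared sign pattern is meaningful.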

  3. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.

  4. Face identity matching is selectively impaired in developmental prosopagnosia.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2017-04-01

    Individuals with developmental prosopagnosia (DP) have severe face recognition deficits, but the mechanisms that are responsible for these deficits have not yet been fully identified. We assessed whether the activation of visual working memory for individual faces is selectively impaired in DP. Twelve DPs and twelve age-matched control participants were tested in a task where they reported whether successively presented faces showed the same or two different individuals, and another task where they judged whether the faces showed the same or different facial expressions. Repetitions versus changes of the other currently irrelevant attribute were varied independently. DPs showed impaired performance in the identity task, but performed at the same level as controls in the expression task. An electrophysiological marker for the activation of visual face memory by identity matches (N250r component) was strongly attenuated in the DP group, and the size of this attenuation was correlated with poor performance in a standardized face recognition test. Results demonstrate an identity-specific deficit of visual face memory in DPs. Their reduced sensitivity to identity matches in the presence of other image changes could result from earlier deficits in the perceptual extraction of image-invariant visual identity cues from face images. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  5. The Ability of Visually Impaired Children to Read Expressions and Recognize Faces.

    ERIC Educational Resources Information Center

    Ellis, H. D.; And Others

    1987-01-01

    Seventeen visually impaired children, aged 7-11 years, were compared with sighted children on a test of facial recognition and a test of expression identification. The visually impaired children were less able to recognize faces successfully but showed no disadvantage in discerning facial expressions such as happiness, anger, surprise, or fear.…

  6. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    PubMed Central

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

Face perception is critical for social communication. Given the fundamental importance of face perception over the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions surprisingly well by haptics. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  7. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

Face perception is critical for social communication. Given the fundamental importance of face perception over the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions surprisingly well by haptics. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  8. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. Minimizing Skin Color Differences Does Not Eliminate the Own-Race Recognition Advantage in Infants

    PubMed Central

    Anzures, Gizelle; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Lee, Kang

    2011-01-01

    An abundance of experience with own-race faces and limited to no experience with other-race faces has been associated with better recognition memory for own-race faces in infants, children, and adults. This study investigated the developmental origins of this other-race effect (ORE) by examining the role of a salient perceptual property of faces—that of skin color. Six- and 9-month-olds’ recognition memory for own- and other-race faces was examined using infant-controlled habituation and visual-paired comparison at test. Infants were shown own- or other-race faces in color or with skin color cues minimized in grayscale images. Results for the color stimuli replicated previous findings that infants show an ORE in face recognition memory. Results for the grayscale stimuli showed that even when a salient perceptual cue to race, such as skin color information, is minimized, 6- to 9-month-olds, nonetheless, show an ORE in their face recognition memory. Infants’ use of shape-based and configural cues for face recognition is discussed. PMID:22039335

  10. Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo

    2011-01-01

In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423

  11. Emotion recognition deficits associated with ventromedial prefrontal cortex lesions are improved by gaze manipulation.

    PubMed

    Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael

    2016-09-01

    Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    PubMed Central

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  13. Face familiarity promotes stable identity recognition: exploring face perception using serial dependence

    PubMed Central

    Kok, Rebecca; Van der Burg, Erik; Rhodes, Gillian; Alais, David

    2017-01-01

    Studies suggest that familiar faces are processed in a manner distinct from unfamiliar faces and that familiarity with a face confers an advantage in identity recognition. Our visual system seems to capitalize on experience to build stable face representations that are impervious to variation in retinal input that may occur due to changes in lighting, viewpoint, viewing distance, eye movements, etc. Emerging evidence also suggests that our visual system maintains a continuous perception of a face's identity from one moment to the next despite the retinal input variations through serial dependence. This study investigates whether interactions occur between face familiarity and serial dependence. In two experiments, participants used a continuous scale to rate attractiveness of unfamiliar and familiar faces (either experimentally learned or famous) presented in rapid sequences. Both experiments revealed robust inter-trial effects in which attractiveness ratings for a given face depended on the preceding face's attractiveness. This inter-trial attractiveness effect was most pronounced for unfamiliar faces. Indeed, when participants were familiar with a given face, attractiveness ratings showed significantly less serial dependence. These results represent the first evidence that familiar faces can resist the temporal integration seen in sequential dependencies and highlight the importance of familiarity to visual cognition. PMID:28405355

  14. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    PubMed Central

    Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence

    2015-01-01

Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces or voices alone) and multimodal (faces and voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli and for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas it was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, the impact of developmental coordination disorder or neuro-visual impairments was not found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928

  15. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal both at study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  16. Recognition Memory for Realistic Synthetic Faces

    PubMed Central

    Yotsumoto, Yuko; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert

    2006-01-01

    A series of experiments examined short-term recognition memory for trios of briefly-presented, synthetic human faces derived from three real human faces. The stimuli were graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces' perceptual similarity structure strongly influenced recognition memory. Results were fit by NEMo, a noisy exemplar model of perceptual recognition memory. The fits revealed that recognition memory was influenced both by the similarity of the probe to series items, and by the similarities among the series items themselves. Non-metric multi-dimensional scaling (MDS) showed that faces' perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMo gave a better account of the results when similarity was defined as perceptual, MDS similarity rather than physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli. PMID:17948069
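    The core decision rule in exemplar models such as NEMo is summed similarity: a probe face is judged "old" when its total similarity to the stored study exemplars exceeds a criterion. A minimal sketch of that rule follows; the face-space coordinates, similarity decay rate, and criterion are illustrative toy values, not fitted parameters from the study.

    ```python
    import numpy as np

    def summed_similarity(probe, study_items, tau=1.0):
        """Summed exponential similarity of a probe to the studied exemplars,
        the core quantity in exemplar models of recognition memory."""
        distances = np.linalg.norm(study_items - probe, axis=1)
        return np.exp(-tau * distances).sum()

    # A toy 3-D "face space": three studied faces and two probes.
    study = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
    target = np.array([0.05, 0.0, 0.0])   # very close to a studied face
    lure = np.array([3.0, 3.0, 3.0])      # far from all studied faces

    # An "old" response is given when summed similarity exceeds a criterion.
    criterion = 0.5
    print(summed_similarity(target, study) > criterion)  # True  ("old")
    print(summed_similarity(lure, study) > criterion)    # False ("new")
    ```

    Note how the rule captures the paper's finding that memory depends not only on probe-to-list similarity but also on within-list homogeneity: tightly clustered study items inflate summed similarity for any nearby probe.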

  17. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    PubMed

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  18. Recognition and context memory for faces from own and other ethnic groups: a remember-know investigation.

    PubMed

    Horry, Ruth; Wright, Daniel B; Tredoux, Colin G

    2010-03-01

    People are more accurate at recognizing faces from their own ethnic group than at recognizing faces from other ethnic groups. This other-ethnicity effect (OEE) in recognition may be produced by a deficit in recollective memory for other-ethnicity faces. In a single study, White and Black participants saw White and Black faces presented within several different visual contexts. The participants were then given an old/new recognition task. Old responses were followed by remember-know-guess judgments and context judgments. Own-ethnicity faces were recognized more accurately, were given more remember responses, and produced more accurate context judgments than did other-ethnicity faces. These results are discussed in a dual-process framework, and implications for eyewitness memory are considered.

  19. Face recognition in newly hatched chicks at the onset of vision.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces; for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision. (c) 2015 APA, all rights reserved.

  20. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. A subject with extensive object experience who nonetheless performs poorly at object recognition is therefore also likely to show low face recognition ability. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  1. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. A subject with extensive object experience who nonetheless performs poorly at object recognition is therefore also likely to show low face recognition ability. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. © 2014 ARVO.

  2. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. First, we use the eigenface algorithm to convert a sketch into a synthesized face image. Because the synthesized face image suffers from low visual quality, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is then employed to improve its visual effect. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we apply LDA (linear discriminant analysis) to perform face sketch recognition on the synthesized face images both before and after super resolution. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that super resolution improves the recognition rate of an SVM (support vector machine) classifier from 65% to 69% and that of LDA from 69% to 75%. Moreover, the super-resolved synthesized face images not only better capture image details such as the hair, nose, and mouth, but also improve recognition accuracy effectively.
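    The recognition stage of the pipeline described above can be sketched in a few lines. This is not the authors' code: the data below are random stand-ins for vectorized face/sketch images, and nearest-centroid matching in eigenface space stands in for the paper's LDA classifier.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 40 synthetic "subjects", 5 images each, flattened to 1024-dim vectors.
    n_subjects, n_per, dim = 40, 5, 1024
    centers = rng.normal(size=(n_subjects, dim))
    X = np.repeat(centers, n_per, axis=0) + 0.3 * rng.normal(
        size=(n_subjects * n_per, dim))
    y = np.repeat(np.arange(n_subjects), n_per)

    # Hold out one image per subject as a probe set.
    test_mask = np.tile([True] + [False] * (n_per - 1), n_subjects)
    Xtr, ytr = X[~test_mask], y[~test_mask]
    Xte, yte = X[test_mask], y[test_mask]

    # Eigenface step: principal components of the centered training images.
    mean = Xtr.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xtr - mean, full_matrices=False)
    W = Vt[:50].T                      # 50 leading eigenfaces
    Ztr, Zte = (Xtr - mean) @ W, (Xte - mean) @ W

    # Recognition step: assign each probe to the subject whose class
    # centroid is nearest in eigenface space.
    centroids = np.stack([Ztr[ytr == c].mean(axis=0)
                          for c in range(n_subjects)])
    pred = np.linalg.norm(Zte[:, None, :] - centroids[None, :, :],
                          axis=2).argmin(axis=1)
    accuracy = (pred == yte).mean()
    print(f"recognition rate: {accuracy:.2f}")
    ```

    With well-separated synthetic classes this toy setup recognizes essentially all probes; on real sketch/photo pairs, the cross-modality gap is what the eigenface synthesis and super-resolution steps are meant to close.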

  3. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  4. Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    PubMed Central

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904

  5. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  6. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why? The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition, but otherwise they will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and performed aggressive displays towards models of non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  8. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
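The simple-to-complex hierarchy that both HMAX and this model build on can be sketched in a single stage: S1 "simple cells" correlate the image with a template, and C1 "complex cells" max-pool those responses over a neighborhood, trading precise position for shift tolerance. A toy pure-Python sketch (the 2x2 edge template and pooling size are illustrative; real HMAX uses banks of Gabor filters at many orientations and scales):

```python
def s1_layer(image, kernel):
    """Simple cells: valid cross-correlation of a 2-D image with one template."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def c1_layer(resp, pool):
    """Complex cells: max pooling over pool x pool neighborhoods, giving
    tolerance to small shifts of the preferred feature."""
    h, w = len(resp), len(resp[0])
    return [[max(resp[i + di][j + dj]
                 for di in range(pool) for dj in range(pool))
             for j in range(0, w - pool + 1, pool)]
            for i in range(0, h - pool + 1, pool)]

# Template that responds where intensity increases from left to right.
vertical_edge = [[-1, 1],
                 [-1, 1]]
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
s1 = s1_layer(image, vertical_edge)   # 3x3 map peaking along the edge column
c1 = c1_layer(s1, pool=2)             # pooled response survives small shifts
print(s1[0], c1)                      # prints: [0, 2, 0] [[2]]
```

Stacking further S/C stages on top of such pooled maps is what sharpens stimulus selectivity while building invariance, which is the property the ART-based learning rule in this paper exploits at the intermediate feature level.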

  9. Visual attention: low-level and high-level viewpoints

    NASA Astrophysics Data System (ADS)

    Stentiford, Fred W. M.

    2012-06-01

    This paper provides a brief outline of approaches to modeling human visual attention. Bottom-up and top-down mechanisms are described together with some of the problems that they face. It has been suggested in brain science that memory functions by trading measurement precision for associative power; sensory inputs from the environment are never identical on separate occasions, but associations with memory compensate for the differences. A graphical representation for image similarity is described that relies on the size of maximally associative structures (cliques) found between pairs of images. This is applied to the recognition of movie posters, the location and recognition of characters, and the recognition of faces. The similarity mechanism is shown to model pop-out effects when constraints are placed on the physical separation of the pixels that correspond to nodes in the maximal cliques. The effect extends to modeling human visual behaviour on the Poggendorff illusion.
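The clique-based similarity measure can be illustrated with an association-graph sketch. The construction below is an assumption for illustration (the abstract does not spell it out): each node pairs a feature point from one image with a point from the other, two pairings are linked when they preserve the inter-point distance, and similarity is the size of the largest clique of mutually consistent pairings.

```python
from itertools import combinations

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def max_clique_similarity(points_a, points_b, tol=0.5):
    """Similarity of two point sets = size of the largest set of pairings
    (i, j) that are mutually consistent, i.e. preserve pairwise distances."""
    nodes = [(i, j) for i in range(len(points_a)) for j in range(len(points_b))]
    def consistent(m, n):
        (i1, j1), (i2, j2) = m, n
        if i1 == i2 or j1 == j2:          # each point may be used once
            return False
        return abs(dist(points_a[i1], points_a[i2])
                   - dist(points_b[j1], points_b[j2])) <= tol
    # Brute-force clique search, acceptable for toy inputs only.
    for size in range(min(len(points_a), len(points_b)), 0, -1):
        for subset in combinations(nodes, size):
            if all(consistent(m, n) for m, n in combinations(subset, 2)):
                return size
    return 0

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
shifted = [(10, 10), (12, 10), (12, 12), (10, 12)]   # same shape, translated
triangle = [(0, 0), (3, 0), (0, 4)]
print(max_clique_similarity(square, shifted))   # 4: full match despite shift
print(max_clique_similarity(square, triangle))  # 2: only a partial match
```

Maximum-clique search is NP-hard in general, so a real system would replace the brute-force loop with a dedicated or approximate clique solver.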

  10. Cross spectral, active and passive approach to face recognition for improved performance

    NASA Astrophysics Data System (ADS)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.

    2017-08-01

    Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Because these characteristics are unique, biometrics can create a direct link between a person and an identity. The human face is one of the most important biometric modalities for automatic authentication. However, the most popular face recognition methods, which rely on processing visible-range imagery, remain imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.

  11. When apperceptive agnosia is explained by a deficit of primary visual processing.

    PubMed

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

    Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia can be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read, or recognize familiar faces. He was able to recognize objects by touch and people by their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
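The core of a locally linear KNN classifier can be sketched compactly: reconstruct the query from its k nearest training samples by least squares and vote by per-class reconstruction weight. This sketch deliberately drops the paper's sparsity and locality regularizers, the shifted power transformation, and the other refinements, keeping only the basic mechanism:

```python
import numpy as np

def llk_classify(x, X_train, y_train, k=3):
    """Simplified locally linear KNN: reconstruct the query from its k
    nearest training samples and vote by per-class reconstruction weight.
    (The paper's full objective adds sparsity/locality criteria and further
    processing; only the local least-squares core is kept here.)"""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]                   # indices of k nearest samples
    # Solve min_w ||x - X_nn^T w||^2 for the local reconstruction weights.
    w, *_ = np.linalg.lstsq(X_train[nn].T, x, rcond=None)
    w = np.clip(w, 0, None)                      # keep nonnegative contributions
    classes = np.unique(y_train[nn])
    scores = {c: w[y_train[nn] == c].sum() for c in classes}
    return max(scores, key=scores.get)           # class with largest weight mass

# Toy data: two well-separated classes in 2-D.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 3.2], [2.9, 3.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(llk_classify(np.array([0.15, 0.1]), X, y))   # 0
print(llk_classify(np.array([3.05, 3.0]), X, y))   # 1
```

Because the reconstruction uses only the k nearest samples, the representation stays local, which is the property the paper argues connects this family of classifiers to the Bayes decision rule.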

  13. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, in which only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques which perform better for face recognition use a holistic image representation, while those suited to facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant-information minimization, mutual-information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  14. The cognitive neuroscience of person identification.

    PubMed

    Biederman, Irving; Shilowich, Bryan E; Herald, Sarah B; Margalit, Eshed; Maarek, Rafael; Meschke, Emily X; Hacker, Catrina M

    2018-02-14

    We compare and contrast person identification by voice and by face along five dimensions. 1. There is little or no cost when a familiar face is to be recognized from an unrestricted set of possible faces, even at Rapid Serial Visual Presentation (RSVP) rates, but the accuracy of familiar voice recognition declines precipitously when the set of possible speakers is increased from one to a mere handful. 2. Whereas deficits in face recognition are typically perceptual in origin, those with normal perception of voices can manifest severe deficits in their identification. 3. Congenital prosopagnosics (CPros) and congenital phonagnosics (CPhon) are generally unable to imagine familiar faces and voices, respectively. Only in CPros, however, is this deficit a manifestation of a general inability to form visual images of any kind. CPhons report no deficit in imaging non-voice sounds. 4. The prevalence of CPhons (3.2%) is somewhat higher than the reported prevalence of approximately 2.0% for CPros in the population. There is evidence that CPhon represents a distinct condition statistically and not just normal variation. 5. Face and voice recognition proficiency are uncorrelated rather than reflecting limitations of a general capacity for person individuation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Aging effects on selective attention-related electroencephalographic patterns during face encoding.

    PubMed

    Deiber, M-P; Rodriguez, C; Jaques, D; Missonnier, P; Emch, J; Millet, P; Gold, G; Giannakopoulos, P; Ibañez, V

    2010-11-24

    Previous electrophysiological studies revealed that human faces elicit an early visual event-related potential (ERP) within the occipito-temporal cortex, the N170 component. Although face perception has been proposed to rely on automatic processing, the impact of selective attention on N170 remains controversial both in young and elderly individuals. Using early visual ERP and alpha power analysis, we assessed the influence of aging on selective attention to faces during delayed-recognition tasks for face and letter stimuli, examining 36 elderly and 20 young adults with preserved cognition. Face recognition performance worsened with age. Aging induced a latency delay of the N1 component for faces and letters, as well as of the face N170 component. In contrast with letters, ignored faces elicited larger N1 and N170 components than attended faces in both age groups. This counterintuitive attention effect on face processing persisted when scenes replaced letters. Unlike young adults, elderly subjects failed to suppress irrelevant letters when attending faces. Whereas attended stimuli induced a parietal alpha band desynchronization within 300-1000 ms post-stimulus with bilateral-to-right distribution for faces and left lateralization for letters, ignored and passively viewed stimuli elicited a central alpha synchronization larger on the right hemisphere. Aging delayed the latency of this alpha synchronization for both face and letter stimuli, and reduced its amplitude for ignored letters. These results suggest that due to their social relevance, human faces may cause paradoxical attention effects on early visual ERP components, but they still undergo classical top-down control as a function of endogenous selective attention. 
Aging does not affect the face bottom-up alerting mechanism but reduces the top-down suppression of distracting letters, possibly impinging upon face recognition, and more generally delays the top-down suppression of task-irrelevant information. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Capturing specific abilities as a window into human individuality: the example of face recognition.

    PubMed

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  17. A systematic review of visual processing and associated treatments in body dysmorphic disorder.

    PubMed

    Beilharz, F; Castle, D J; Grace, S; Rossell, S L

    2017-07-01

    Recent advances in body dysmorphic disorder (BDD) have explored abnormal visual processing, yet it is unclear how this relates to treatment. The aim of this study was to summarize our current understanding of visual processing in BDD and review associated treatments. The literature was collected through PsycInfo and PubMed. Visual processing articles were included if written in English after 1970, had a specific BDD group compared to healthy controls and were not case studies. Due to the lack of research regarding treatments associated with visual processing, case studies were included. A number of visual processing abnormalities are present in BDD, including face recognition, emotion identification, aesthetics, object recognition and gestalt processing. Differences to healthy controls include a dominance of detailed local processing over global processing and associated changes in brain activation in visual regions. Perceptual mirror retraining and some forms of self-exposure have demonstrated improved treatment outcomes, but have not been examined in isolation from broader treatments. Despite these abnormalities in perception, particularly concerning face and emotion recognition, few BDD treatments attempt to specifically remediate this. The development of a novel visual training programme which addresses these widespread abnormalities may provide an effective treatment modality. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. The time course of individual face recognition: A pattern analysis of ERP signals.

    PubMed

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis to investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using threatening vs. non-threatening animal face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left-presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. 
Accordingly, peripheral processing of these stimuli more strongly activated the putamen, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate-based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  20. The roles of perceptual and conceptual information in face recognition.

    PubMed

    Schwartz, Linoy; Yovel, Galit

    2016-11-01

    The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition with respect to three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly specialized for facial processing remains controversial; some researchers argue that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are also related to gaze recognition, which explains why a patient with bilateral amygdala damage could not recognize only the fear expression; information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition that prosopagnosic patients retain.

  2. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  3. Hemifield memory for attractiveness.

    PubMed

    Deblieck, C; Zaidel, D W

    2003-07-01

    In order to determine whether or not facial attractiveness plays a role in hemispheric facial memory, 35 right-handed participants first assigned attractiveness ratings to faces and then performed a recognition test on those faces in the left visual half-field (LVF) and right visual half-field (RVF). We found significant interactions between the experimental factors and visual half-field. There were significant differences at the extreme ends of the rating scale, that is, for the very unattractive versus the very attractive faces: female participants remembered very attractive faces of both women and men, with memory superior in the RVF compared with the LVF. In contrast, male participants remembered very unattractive faces of both women and men; memory was better in the RVF than in the LVF for female faces, while for male faces memory was superior in the LVF. The interactions with visual half-field suggest that hemispheric biases in remembering faces are influenced by the degree of attractiveness.

  4. A family at risk: congenital prosopagnosia, poor face recognition and visuoperceptual deficits within one family.

    PubMed

    Johnen, Andreas; Schmukle, Stefan C; Hüttenbrink, Judith; Kischka, Claudia; Kennerknecht, Ingo; Dobel, Christian

    2014-05-01

    Congenital prosopagnosia (CP) describes a severe face processing impairment despite intact early vision and in the absence of overt brain damage. CP is assumed to be present from birth and often transmitted within families. Previous studies reported conflicting findings regarding associated deficits in nonface visuoperceptual tasks. However, diagnostic criteria for CP significantly differed between studies, impeding conclusions on the heterogeneity of the impairment. Following current suggestions for clinical diagnoses of CP, we administered standardized tests for face processing, a self-report questionnaire and general visual processing tests to an extended family (N=28), in which many members reported difficulties with face recognition. This allowed us to assess the degree of heterogeneity of the deficit within a large sample of suspected CPs of similar genetic and environmental background. (a) We found evidence for a severe face processing deficit but intact nonface visuoperceptual skills in three family members - a father and his two sons - who fulfilled conservative criteria for a CP diagnosis on standardized tests and a self-report questionnaire, thus corroborating findings of familial transmissions of CP. (b) Face processing performance of the remaining family members was also significantly below the mean of the general population, suggesting that face processing impairments are transmitted as a continuous trait rather than in a dichotomous all-or-nothing fashion. (c) Self-rating scores of face recognition showed acceptable correlations with standardized tests, suggesting this method as a viable screening procedure for CP diagnoses. (d) Finally, some family members revealed severe impairments in general visual processing and nonface visual memory tasks either in conjunction with face perception deficits or as an isolated impairment. 
This finding may indicate an elevated risk for more general visuoperceptual deficits in families with prosopagnosic members. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Emotional tears facilitate the recognition of sadness and the perceived need for social support.

    PubMed

    Balsters, Martijn J H; Krahmer, Emiel J; Swerts, Marc G J; Vingerhoets, Ad J J M

    2013-02-12

The tearing effect refers to the relevance of tears as an important visual cue adding meaning to human facial expression. However, little is known about how people process these visual cues and about their mediating role in emotion perception and person judgment. We therefore conducted two experiments in which we measured the influence of tears on the identification of sadness and on the perceived need for social support at an early perceptual level. In both experiments, participants were exposed to sad and neutral faces, with the face stimuli presented for 50 milliseconds. In experiment 1, tears were digitally added to sad faces in one condition. Participants recognized sad faces with tears significantly faster than those without tears. In experiment 2, tears were added to neutral faces as well. Participants had to indicate to what extent the displayed individuals were in need of social support. Participants reported a greater perceived need for social support for both sad and neutral faces with tears than for those without tears. This study thus demonstrated that emotional tears serve as important visual cues at an early (pre-attentive) level.

  6. Recognition of own-race and other-race faces by three-month-old infants.

    PubMed

    Sangrigoli, Sandy; De Schonen, Scania

    2004-10-01

    People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested face recognition in 3-month-old Caucasian infants by conducting two experiments using Caucasian and Asiatic faces and a visual pair-comparison task. We hypothesized that if the other race effect develops together with face processing skills during the first months of life, the ability to recognize own-race faces will be greater than the ability to recognize other-race faces: 3-month-old Caucasian infants should be better at recognizing Caucasian faces than Asiatic faces. If, on the contrary, the other-race effect is the long-term result of acquired expertise, no difference between recognizing own- and other-race faces will be observed at that age. In Experiment 1, Caucasian infants were habituated to a single face. Recognition was assessed by a novelty preference paradigm. The infants' recognition performance was better for Caucasian than for Asiatic faces. In Experiment 2, Caucasian infants were familiarized with three individual faces. Recognition was demonstrated with both Caucasian and Asiatic faces. These results suggest that (i) the representation of face information by 3-month-olds may be race-experience-dependent (Experiment 1), and (ii) short-term familiarization with exemplars of another race group is sufficient to reduce the other-race effect and to extend the power of face processing (Experiment 2).

  7. Development of a battery of functional tests for low vision.

    PubMed

    Dougherty, Bradley E; Martin, Scott R; Kelly, Corey B; Jones, Lisa A; Raasch, Thomas W; Bullimore, Mark A

    2009-08-01

We describe the development and evaluation of a battery of tests of functional visual performance on everyday tasks, intended to be suitable for the assessment of low vision patients. The functional test battery comprises the following. Reading rate: reading aloud 20 unrelated words for each of four print sizes (8, 4, 2, and 1 M); Telephone book: finding a name and reading the telephone number; Medicine bottle label: reading the name and dosing; Utility bill: reading the due date and amount due; Cooking instructions: reading the cooking time on a food package; Coin sorting: making a specified amount from coins placed on a table; Playing card recognition: identifying denomination and suit; and Face recognition: identifying expressions of printed, life-size faces at 1 and 3 m. All tests were timed except face and playing card recognition. Fourteen normally sighted and 24 low vision subjects were assessed with the functional test battery. Visual acuity, contrast sensitivity, and quality of life (National Eye Institute Visual Function Questionnaire 25 [NEI-VFQ 25]) were measured and the functional tests repeated. Subsequently, 23 low vision patients participated in a pilot randomized clinical trial, with half receiving low vision rehabilitation and half a delayed intervention. The functional tests were administered at enrollment and 3 months later. Normally sighted subjects could perform all tasks, but the proportion of trials performed correctly by the low vision subjects ranged from 35% for face recognition at 3 m to 95% for playing card identification. On average, low vision subjects performed three times slower than normally sighted subjects. Timed tasks with a visual search component showed poorer repeatability. In the pilot clinical trial, low vision rehabilitation produced the greatest improvement for the medicine bottle and cooking instruction tasks. Patients' performance on these functional tests has thus been assessed, and some of the tests appear responsive to low vision rehabilitation.

  8. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    PubMed

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

Autism spectrum disorders (ASD) are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive-viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. The enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.

  9. Revisiting the earliest electrophysiological correlate of familiar face recognition.

    PubMed

    Huang, Wanyi; Wu, Xia; Hu, Liping; Wang, Lei; Ding, Yulong; Qu, Zhe

    2017-10-01

The present study used event-related potentials (ERPs) to reinvestigate the earliest face familiarity effect (FFE: ERP differences between familiar and unfamiliar faces) that genuinely reflects the cognitive processes underlying recognition of familiar faces in long-term memory. To trigger relatively early FFEs, participants were required to categorize upright and inverted famous faces and unknown faces in a task that placed high demand on face recognition. More importantly, to determine whether an observed FFE was linked to on-line face recognition, we systematically investigated the relationship between the FFE and behavioral face recognition performance. The results showed significant FFEs on the P1, N170, N250, and P300 waves. The FFEs on occipital P1 and N170 (<200 ms) showed reversed polarities for upright and inverted faces, and were not correlated with any behavioral measure (accuracy, response time) or modulated by learning, indicating that they might merely reflect low-level visual differences between face sets. In contrast, the later FFEs on occipito-temporal N250 (~230 ms) and centro-parietal P300 (~350 ms) showed consistent polarities for upright and inverted faces. The N250 FFE was individually correlated with recognition speed for upright faces, and could be obtained for inverted faces through learning. The P300 FFE was also related to behavior in many respects. These findings provide novel evidence that cognitive discrimination of familiar and unfamiliar faces begins no earlier than 200 ms after stimulus onset, and that the familiarity effect on N250 may be the first electrophysiological correlate of recognition of familiar faces in long-term memory. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Recognition of facial and musical emotions in Parkinson's disease.

    PubMed

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

Patients with amygdala lesions have been found to be impaired in recognizing the emotion of fear both from faces and from music. In patients with Parkinson's disease (PD), impaired recognition of emotions from facial expressions has been reported for disgust, fear, sadness and anger, but no study had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment scales for anxiety and depression. Results showed that the PD group was significantly impaired in recognizing both fear and sadness from facial expressions, whereas their performance in recognizing emotions from musical excerpts did not differ from that of the control group. Scores for fear and sadness recognition from faces were correlated neither with scores on tests of executive and cognitive functions nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  11. REM sleep and emotional face memory in typically-developing children and children with autism.

    PubMed

    Tessier, Sophie; Lambert, Andréane; Scherzer, Peter; Jemel, Boutheina; Godbout, Roger

    2015-09-01

The relationship between REM sleep and memory was assessed in 13 neurotypical children and 13 children with autism spectrum disorder (ASD). A neutral/positive/negative face recognition task was administered the evening before (learning and immediate recognition) and the morning after (delayed recognition) sleep. The number of rapid eye movements (REMs) and beta and theta EEG activity over the visual areas were measured during REM sleep. Compared to neurotypical children, children with ASD showed more theta activity and longer reaction times (RTs) for correct responses in delayed recognition of neutral faces. Both groups showed a positive correlation between sleep and performance, but different patterns emerged: in neurotypical children, accuracy for recalling neutral faces and overall overnight RT improvement were correlated with EEG activity and REMs; in children with ASD, overnight RT improvement for positive and negative faces correlated with theta and beta activity, respectively. These results suggest that neurotypical children and children with ASD use different sleep-related brain networks to process faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Females scan more than males: a potential mechanism for sex differences in recognition memory.

    PubMed

    Heisz, Jennifer J; Pottruff, Molly M; Shore, David I

    2013-07-01

    Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment.

  13. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition

    PubMed Central

    Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene

    2014-01-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662

  14. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition.

    PubMed

    Dundas, Eva M; Plaut, David C; Behrmann, Marlene

    2014-08-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition does not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Reward learning modulates the attentional processing of faces in children with and without autism spectrum disorder.

    PubMed

    Li, Tianbi; Wang, Xueqin; Pan, Junhao; Feng, Shuyuan; Gong, Mengyuan; Wu, Yaxue; Li, Guoxiang; Li, Sheng; Yi, Li

    2017-11-01

The processing of social stimuli, such as human faces, is impaired in individuals with autism spectrum disorder (ASD), which could be accounted for by their lack of social motivation. The current study examined how the attentional processing of faces in children with ASD could be modulated by the learning of face-reward associations. Sixteen high-functioning children with ASD and 20 age- and ability-matched typically developing peers participated in the experiments. All children started with a reward learning task, in which they were presented with three female faces that were attributed positive, negative, and neutral values, and were required to remember the faces and their associated values. After this, they were tested on recognition of the learned faces and on a visual search task in which the learned faces served as distractors. We found a modulatory effect of the face-reward associations on the visual search but not the recognition performance in both groups, despite the lower efficacy among children with ASD in learning the face-reward associations. Specifically, both groups responded faster when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral, suggesting efficient attentional processing of these reward-associated faces. Our findings provide direct evidence for a perceptual-level modulatory effect of reward learning on the attentional processing of faces in individuals with ASD. Autism Res 2017, 10: 1797-1807. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. In our study, we tested whether the face processing of individuals with ASD could be changed when the faces were associated with different social meanings. We found no effect of social meanings on face recognition, but both groups responded faster in the visual search task when one of the distractor faces was associated with positive or negative values than when it was neutral.
The findings suggest that children with ASD can process faces associated with different values as efficiently as typical children do. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  16. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  17. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme, we have to analyze the speaker's face and focus on the cues relevant for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants performed a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars, presented in an upright or inverted position. The results revealed that monolinguals exhibited the classic ORE; bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and extends to face analysis. We hypothesize that these differences could be due to bilinguals focusing on different parts of the face than monolinguals do, making them more efficient at other-race face processing but slower overall. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  18. Visual scanpath abnormalities in 22q11.2 deletion syndrome: is this a face specific deficit?

    PubMed

    McCabe, Kathryn; Rich, Dominique; Loughland, Carmel Maree; Schall, Ulrich; Campbell, Linda Elisabet

    2011-09-30

People with 22q11.2 deletion syndrome (22q11DS) have deficits in face emotion recognition. However, it is not known whether this is a deficit specific to faces or whether it reflects maladaptive information processing strategies for complex stimuli in general. This study examined the specificity of face emotion processing deficits in 22q11DS by comparing recognition accuracy and visual scanpath performance on a Faces task with performance on a Weather Scene task. Seventeen adolescents with 22q11DS (11 female, mean age 17.4 years) and 18 healthy controls (11 female, mean age 17.7 years) participated in the study. People with 22q11DS displayed an overall impoverished scanning strategy for face and weather stimuli alike, resulting in poorer accuracy across all stimuli compared to controls. While the control subjects altered their information processing in response to faces, a similar change was not present in the 22q11DS group, indicating that each task calls for a different visual scanpath strategy to identify category, with faces appearing to be a particularly difficult subcategory. To conclude, while this study indicates that people with 22q11DS have a general visual processing deficit, the lack of strategic change between tasks suggests that the 22q11DS group did not adapt to the change in stimulus content as well as the controls did, indicative of cognitive inflexibility rather than a face-specific deficit. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database that consists of audio and/or visual data for a number of suspects, wanted persons, or missing persons expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.
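
    The abstract above does not spell out the scheme's details, but the general idea of multi-stream wavelet recognition can be sketched briefly: decompose each face image into wavelet subbands, treat each subband as a separate feature stream, and fuse the per-stream matching scores. The minimal numpy sketch below is illustrative only; the Haar filter, the choice of streams, and the fusion weights are assumptions for the example, not the authors' scheme.

    ```python
    import numpy as np

    def haar_dwt2(img):
        """One level of a 2-D Haar wavelet transform.

        Returns the approximation (LL) and detail (LH, HL, HH) subbands.
        """
        a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
        d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    def stream_distance(x, y):
        """Normalized Euclidean distance between two subband feature maps."""
        x, y = x.ravel(), y.ravel()
        return np.linalg.norm(x - y) / (np.linalg.norm(x) + np.linalg.norm(y) + 1e-9)

    def multi_stream_score(probe, gallery, weights=(0.5, 0.25, 0.25)):
        """Fuse per-subband distances into one matching score (lower = closer)."""
        ps, gs = haar_dwt2(probe), haar_dwt2(gallery)
        streams = list(zip(ps[:3], gs[:3]))  # use LL, LH, HL as separate streams
        return sum(w * stream_distance(p, g) for w, (p, g) in zip(weights, streams))
    ```

    A probe face would be compared against every gallery template this way, with identity assigned to the lowest fused score; real systems would add preprocessing (alignment, illumination normalization) and deeper wavelet decompositions.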

  20. Recognition memory of newly learned faces.

    PubMed

    Ishai, Alumit; Yago, Elena

    2006-12-11

    We used event-related fMRI to study recognition memory of newly learned faces. Caucasian subjects memorized unfamiliar, neutral and happy South Korean faces and 4 days later performed a memory retrieval task in the MR scanner. We predicted that previously seen faces would be recognized faster and more accurately and would elicit stronger neural activation than novel faces. Consistent with our hypothesis, novel faces were recognized more slowly and less accurately than previously seen faces. We found activation in a distributed cortical network that included face-responsive regions in the visual cortex, parietal and prefrontal regions, and the hippocampus. Within all regions, correctly recognized, previously seen faces evoked stronger activation than novel faces. Additionally, in parietal and prefrontal cortices, stronger activation was observed during correct than incorrect trials. Finally, in the hippocampus, false alarms to happy faces elicited stronger responses than false alarms to neutral faces. Our findings suggest that face recognition memory is mediated by stimulus-specific representations stored in extrastriate regions; parietal and prefrontal regions where old and new items are classified; and the hippocampus where veridical memory traces are recovered.

  1. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing the object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, or categorical object differences, in visual discrimination tasks, whereas the control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  2. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

Primary open-angle glaucoma (POAG) primarily affects peripheral vision. Current behavioral studies support the idea that the visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test): patients with both peripheral and central defects, and patients with peripheral but no central defect. Both groups, as well as age-matched controls, participated in the experiment. All participants performed two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task on scene images and human face images assessed high-level visual recognition abilities; a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images, consistent with the abnormal retinal sensitivity assessed by perimetry. However, the deficit was greater for categorization than for detection. Patients without a central defect performed similarly to controls in both the detection and the categorization of faces. However, while their detection of scene images was well maintained, these patients showed a deficit in categorizing them. This suggests that the loss of peripheral vision alone can be detrimental to scene recognition, even when the information is displayed in central vision.
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572

  3. Dissecting contributions of prefrontal cortex and fusiform face area to face working memory.

    PubMed

    Druzgal, T Jason; D'Esposito, Mark

    2003-08-15

    Interactions between prefrontal cortex (PFC) and stimulus-specific visual cortical association areas are hypothesized to mediate visual working memory in behaving monkeys. To clarify the roles for homologous regions in humans, event-related fMRI was used to assess neural activity in PFC and fusiform face area (FFA) of subjects performing a delay-recognition task for faces. In both PFC and FFA, activity increased parametrically with memory load during encoding and maintenance of face stimuli, despite quantitative differences in the magnitude of activation. Moreover, timing differences in PFC and FFA activation during memory encoding and retrieval implied a context dependence in the flow of neural information. These results support existing neurophysiological models of visual working memory developed in the nonhuman primate.

  4. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.
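
    The 'behavioral recognition program' described above can be sketched as a toy model (an illustrative simplification of the what/where split; all function names are hypothetical, not the authors' code):

```python
import numpy as np

def learn_program(image, fixations, patch=3):
    """Build a toy 'behavioral recognition program': for each fixation, store
    the attention-window movement (motor/'where' memory) together with the
    image fragment predicted at that fixation (sensory/'what' memory)."""
    program = []
    prev = fixations[0]
    for (y, x) in fixations:
        move = (y - prev[0], x - prev[1])                  # programmed movement
        fragment = image[y:y + patch, x:x + patch].copy()  # predicted fragment
        program.append((move, fragment))
        prev = (y, x)
    return program

def recognize(image, program, patch=3, start=(0, 0)):
    """Replay the program on an image; recognition succeeds only if every
    predicted fragment matches what is found at the corresponding fixation."""
    y, x = start
    for move, fragment in program:
        y, x = y + move[0], x + move[1]
        if not np.array_equal(image[y:y + patch, x:x + patch], fragment):
            return False
    return True
```

Because the program stores relative movements rather than absolute positions, replay can begin at a different start location, which is one way to picture the model's invariance to shift.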

  5. Facial Recognition in a Group-Living Cichlid Fish.

    PubMed

    Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro

    2015-01-01

    The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer and from a greater distance than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to that of primates, including humans.

  6. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smiles) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The results show reliable detection of smiles, with a recognition rate of 97.6% on 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on a saliency score obtained from voting visual cues. To the best of our knowledge, this is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
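
    The two ingredients named above, convolutional feature extraction and voting over visual cues, can be sketched minimally as follows (the kernel, threshold, and voting rule are placeholders for illustration, not the paper's trained network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a
    convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def saliency_vote(feature_maps, threshold=0.5):
    """Toy analogue of a cue-voting scheme: each feature map casts a vote
    wherever its rectified response exceeds a threshold; the saliency score
    is the total number of votes."""
    votes = 0
    for fm in feature_maps:
        votes += int(np.count_nonzero(np.maximum(fm, 0.0) > threshold))
    return votes
```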

  7. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival, and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided encouraging findings in this respect. Self-face-specific activation has typically been reported in areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation also responded to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection supporting physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.

  8. Neural network face recognition using wavelets

    NASA Astrophysics Data System (ADS)

    Karunaratne, Passant V.; Jouny, Ismail I.

    1997-04-01

    The recognition of human faces is a task that has been mastered by the human visual system and that has been researched extensively in the domains of computer neural networks and image processing. This research studies neural networks and wavelet image processing techniques as applied to human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor, they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
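
    The wavelet pre-processing stage can be sketched with the simplest Daubechies wavelet (Haar, db1); this is an illustrative stand-in for the higher-order Daubechies filters the study actually uses:

```python
import numpy as np

def haar_dwt2(image):
    """One level of a 2D Haar wavelet transform (db1, the simplest Daubechies
    wavelet). Returns the approximation band plus horizontal, vertical, and
    diagonal detail bands; keeping only the approximation band gives the kind
    of compressed, denoised input fed to a recognition network."""
    a = (image[0::2, :] + image[1::2, :]) / 2.0   # row averages
    d = (image[0::2, :] - image[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0          # approximation (LL)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0          # horizontal detail (LH)
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0          # vertical detail (HL)
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0          # diagonal detail (HH)
    return ll, lh, hl, hh
```

Each level quarters the number of coefficients, which is how the transform doubles as image compression before the network sees the data.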

  9. Recognition memory is modulated by visual similarity.

    PubMed

    Yago, Elena; Ishai, Alumit

    2006-06-01

    We used event-related fMRI to test whether recognition memory depends on visual similarity between familiar prototypes and novel exemplars. Subjects memorized portraits, landscapes, and abstract compositions by six painters with a unique style, and later performed a memory recognition task. The prototypes were presented with new exemplars that were either visually similar or dissimilar. Behaviorally, novel, dissimilar items were detected faster and more accurately. We found activation in a distributed cortical network that included face- and object-selective regions in the visual cortex, where familiar prototypes evoked stronger responses than new exemplars; attention-related regions in parietal cortex, where responses elicited by new exemplars were reduced with decreased similarity to the prototypes; and the hippocampus and memory-related regions in parietal and prefrontal cortices, where stronger responses were evoked by the dissimilar exemplars. Our findings suggest that recognition memory is mediated by classification of novel exemplars as a match or a mismatch, based on their visual similarity to familiar prototypes.

  10. Stereotype Priming in Face Recognition: Interactions between Semantic and Visual Information in Face Encoding

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.; Honey, R. C.

    2008-01-01

    The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…

  11. Visual Search for Basic Emotional Expressions in Autism; Impaired Processing of Anger, Fear and Sadness, but a Typical Happy Face Advantage

    ERIC Educational Resources Information Center

    Farran, Emily K.; Branson, Amanda; King, Ben J.

    2011-01-01

    Facial expression recognition was investigated in 20 males with high functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, "face in the crowd" paradigm with a…

  12. Massively-Parallel Architectures for Automatic Recognition of Visual Speech Signals

    DTIC Science & Technology

    1988-10-12

    PERSONAL AUTHOR(S): Terrence J. ... characteristics of speech from the visual speech signals. Neural networks have been trained on a database of vowels. The raw images of faces, aligned and preprocessed, were used as input to these networks, which were trained to estimate the corresponding envelope of the

  13. Selective impairment of facial recognition due to a haematoma restricted to the right fusiform and lateral occipital region

    PubMed Central

    Wada, Y; Yamamoto, T

    2001-01-01

    A 67-year-old right-handed Japanese man developed prosopagnosia caused by a haemorrhage. His only deficit was the inability to perceive and discriminate unfamiliar faces, and to recognise familiar faces. He did not show deficits in visual or visuospatial perception of non-facial stimuli, alexia, visual agnosia, or topographical disorientation. Brain MRI showed a haematoma limited to the right fusiform and the lateral occipital region. Single photon emission computed tomography confirmed that there was no decreased blood flow in the opposite left cerebral hemisphere. The present case indicates that a small, well placed lesion of the right fusiform gyrus and the adjacent area can cause isolated impairment of facial recognition. As far as we know, no published case has demonstrated this exact lesion site, which recent functional MRI studies have indicated as the most critical area in facial recognition.

 PMID:11459906

  14. Neurocognitive and electrophysiological evidence of altered face processing in parents of children with autism: implications for a model of abnormal development of social brain circuitry in autism.

    PubMed

    Dawson, Geraldine; Webb, Sara Jane; Wijsman, Ellen; Schellenberg, Gerard; Estes, Annette; Munson, Jeffrey; Faja, Susan

    2005-01-01

    Neuroimaging and behavioral studies have shown that children and adults with autism have impaired face recognition. Individuals with autism also exhibit atypical event-related brain potentials to faces, characterized by a failure to show a negative component (N170) latency advantage to face compared to nonface stimuli and a bilateral, rather than right lateralized, pattern of N170 distribution. In this report, performance by 143 parents of children with autism on standardized verbal, visual-spatial, and face recognition tasks was examined. It was found that parents of children with autism exhibited a significant decrement in face recognition ability relative to their verbal and visual spatial abilities. Event-related brain potentials to face and nonface stimuli were examined in 21 parents of children with autism and 21 control adults. Parents of children with autism showed an atypical event-related potential response to faces, which mirrored the pattern shown by children and adults with autism. These results raise the possibility that face processing might be a functional trait marker of genetic susceptibility to autism. Discussion focuses on hypotheses regarding the neurodevelopmental and genetic basis of altered face processing in autism. A general model of the normal emergence of social brain circuitry in the first year of life is proposed, followed by a discussion of how the trajectory of normal development of social brain circuitry, including cortical specialization for face processing, is altered in individuals with autism. The hypothesis that genetic-mediated dysfunction of the dopamine reward system, especially its functioning in social contexts, might account for altered face processing in individuals with autism and their relatives is discussed.

  15. Benefits for Voice Learning Caused by Concurrent Faces Develop over Time.

    PubMed

    Zäske, Romi; Mühl, Constanze; Schweinberger, Stefan R

    2015-01-01

    Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., "face-overshadowing". In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

  16. Multiple brain networks for visual self-recognition with different sensitivity for motion and body part.

    PubMed

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Miura, Naoki; Akitsuki, Yuko; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2006-10-01

    Multiple brain networks may support visual self-recognition. It has been hypothesized that the left ventral occipito-temporal cortex processes one's own face as a symbol, and that the right parieto-frontal network processes self-image in association with motion-action contingency. Using functional magnetic resonance imaging, we first tested these hypotheses based on the prediction that these networks preferentially respond to a static self-face and to one's moving whole body, respectively. Brain activation specifically related to self-image during familiarity judgment was compared across four stimulus conditions comprising a two-factorial design: the factor Motion contrasted picture (Picture) with movie (Movie), and the factor Body part contrasted face (Face) with whole body (Body). Second, we attempted to segregate self-specific networks using a principal component analysis (PCA), assuming an independent pattern of inter-subject variability in activation over the four stimulus conditions in each network. The bilateral ventral occipito-temporal and the right parietal and frontal cortices exhibited self-specific activation. The left ventral occipito-temporal cortex exhibited greater self-specific activation for Face than for Body in the Picture condition, consistent with the prediction for this region. The activation profiles of the right parietal and frontal cortices did not show the preference for the Movie/Body condition predicted by the assumed roles of these regions. The PCA extracted two cortical networks, one with its peaks in the right posterior cortices and another in the frontal cortices; previous findings suggest their possible roles in visuo-spatial and conceptual self-representations, respectively. The results thus supported and provided evidence of multiple brain networks for visual self-recognition.
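
    The PCA step described above, extracting networks from inter-subject variability in activation over the four conditions, can be sketched generically as follows (PCA via SVD; the data layout, one row per subject, is an assumption for illustration):

```python
import numpy as np

def pca(data, n_components=2):
    """PCA via SVD on mean-centered data. Rows are observations (here, one
    subject's activation profile over the four stimulus conditions); the
    returned components span directions of maximal inter-subject variance."""
    centered = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # principal axes
    scores = centered @ components.T          # per-subject loadings
    explained = (s ** 2) / np.sum(s ** 2)     # variance fraction per component
    return components, scores, explained[:n_components]
```

Subjects whose profiles load on the same component would, under this reading, share one of the extracted networks.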

  17. Visual agnosia and focal brain injury.

    PubMed

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  18. Context Effects on Facial Affect Recognition in Schizophrenia and Autism: Behavioral and Eye-Tracking Evidence.

    PubMed

    Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire

    2016-05-01

    Although schizophrenia (SCZ) and autism spectrum disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n = 44) and those with ASD (n = 21) exhibited reduced affect recognition relative to typically-developing (TD) controls (n = 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than for the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    PubMed

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition

    NASA Astrophysics Data System (ADS)

    Yao, Min; Zhu, Changming

    2017-07-01

    Compensating for the illumination of a face image is an important process for achieving effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method which specifically considers removing illumination boundaries as well as reducing regional illumination. We begin with an analysis of the commonly used reflectance model and then explain the hybrid use of adaptive non-local smoothing and local information coding based on Weber's law. The effectiveness and advantages of this combination are demonstrated visually and experimentally. Results on the Extended Yale B database show better performance than several other well-known methods.
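
    The Weber-law coding stage can be sketched as below (a simplified 'Weberface' over a fixed 3x3 neighbourhood; the paper's adaptive non-local smoothing stage is replaced here by the assumption of a pre-smoothed input, and the function name is my own):

```python
import numpy as np

def weberface(image, alpha=2.0, eps=1e-6):
    """Weber-law illumination-insensitive representation: at each pixel, the
    arctangent of the summed relative intensity differences to the 8
    neighbours. Ratios of local differences to local intensity cancel a
    smoothly varying illumination field."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode='edge')
    acc = np.zeros_like(img)
    h, w = img.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbor = padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            acc += (img - neighbor) / (img + eps)
    return np.arctan(alpha * acc)
```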

  1. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. 
PMID:29023525

  3. Newborns' Face Recognition Is Based on Spatial Frequencies below 0.5 Cycles per Degree

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Turati, Chiara; Rossion, Bruno; Bulf, Hermann; Goffaux, Valerie; Simion, Francesca

    2008-01-01

    A critical question in Cognitive Science concerns how knowledge of specific domains emerges during development. Here we examined how limitations of the visual system during the first days of life may shape subsequent development of face processing abilities. By manipulating the bands of spatial frequencies of face images, we investigated what is…

  4. Fear recognition impairment in early-stage Alzheimer's disease: when focusing on the eyes region improves performance.

    PubMed

    Hot, Pascal; Klein-Koerkamp, Yanica; Borg, Céline; Richard-Mornas, Aurélie; Zsoldos, Isabella; Paignon, Adeline; Thomas Antérion, Catherine; Baciu, Monica

    2013-06-01

    A decline in the ability to identify fearful expressions has been frequently reported in patients with Alzheimer's disease (AD). In patients with severe destruction of the bilateral amygdala, similar difficulties have been reduced by using an explicit visual exploration strategy focusing on gaze. The current study assessed whether a similar strategy, inducing AD patients to process the eyes region, could improve fear recognition. Seventeen patients with mild AD and 34 healthy subjects (17 young adults and 17 older adults) performed a classical task of emotional identification of faces expressing happiness, anger, and fear in two conditions: the face appeared progressively from the eyes region to the periphery (eyes region condition) or it appeared as a whole (global condition). Specific impairment in identifying a fearful expression was shown in AD patients compared with older adult controls during the global condition. Fear expression recognition was significantly improved in AD patients during the eyes region condition, in which they performed similarly to older adult controls. Our results suggest that using a different strategy of face exploration, starting with processing of the eyes region, may compensate for the fear recognition deficit in AD patients, and that part of this deficit could be related to visuo-perceptual impairments. Additionally, these findings suggest that the decline of fearful face recognition reported in both normal aging and AD may result from impairment of non-amygdalar processing in both groups and of amygdala-dependent processing in AD. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Sex Differences in Face Processing: Are Women Less Lateralized and Faster than Men?

    ERIC Educational Resources Information Center

    Godard, Ornella; Fiori, Nicole

    2010-01-01

    The aim of this study was to determine the influence of sex on hemispheric asymmetry and cooperation in a face recognition task. We used a masked priming paradigm in which the prime stimulus was centrally presented; it could be a bisymmetric face or a hemi-face in which facial information was presented in the left or the right visual field and…

  6. “A room full of strangers every day”: The psychosocial impact of developmental prosopagnosia on children and their families

    PubMed Central

    Dalrymple, Kirsten A.; Fletcher, Kimberley; Corrow, Sherryse; Nair, Roshan das; Barton, Jason J. S.; Yonas, Albert; Duchaine, Brad

    2014-01-01

    Objective Individuals with developmental prosopagnosia (‘face blindness’) have severe face recognition difficulties due to a failure to develop the necessary visual mechanisms for recognizing faces. These difficulties occur in the absence of brain damage and despite normal low-level vision and intellect. Adults with developmental prosopagnosia report serious personal and emotional consequences from their inability to recognize faces, but little is known about the psychosocial consequences in childhood. Given the importance of face recognition in daily life, and the potential for unique social consequences of impaired face recognition in childhood, we sought to evaluate the impact of developmental prosopagnosia on children and their families. Methods We conducted semi-structured interviews with 8 children with developmental prosopagnosia and their parents. A battery of face recognition tests was used to confirm the face recognition impairment reported by the parents of each child. We used thematic analysis to develop common themes among the psychosocial experiences of the children and their parents. Results Three themes were developed from the child reports: 1) awareness of their difficulties, 2) coping strategies, such as using non-facial cues to identify others, and 3) social implications, such as discomfort in, and avoidance of, social situations. These themes were paralleled by the parent reports and highlight the unique social and practical challenges associated with childhood developmental prosopagnosia. Conclusion Our findings indicate a need for increased awareness and treatment of developmental prosopagnosia to help these children manage their face recognition difficulties and to promote their social and emotional wellbeing. PMID:25077856

  7. Unseen stimuli modulate conscious visual experience: evidence from inter-hemispheric summation.

    PubMed

    de Gelder, B; Pourtois, G; van Raamsdonk, M; Vroomen, J; Weiskrantz, L

    2001-02-12

    Emotional facial expressions can be discriminated despite extensive lesions of striate cortex. Here we report differential performance in recognizing facial stimuli in the intact visual field depending on simultaneous presentation of congruent or incongruent stimuli in the blind field. Three experiments were based on inter-hemispheric summation. Redundant stimulation in the blind field led to shorter latencies for stimulus detection in the intact field. Recognition of the expression of a half-face presented in the intact field was faster when the other half of the face, presented to the blind field, had a congruent expression. Finally, responses to the expressions of whole faces presented to the intact field were delayed when incongruent facial expressions were presented in the blind field. These results indicate that the neuro-anatomical pathways (extra-striate cortical and sub-cortical) sustaining inter-hemispheric summation can operate in the absence of striate cortex.

  8. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes which degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM can provide robustness against these synchronization errors. The overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1 s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved for synchronization errors of less than 100 ms.
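    The fusion idea above can be sketched in a few lines of Python. This is a hedged illustration, not the authors' implementation: the function names, the min-max normalization, and the weighted-sum fusion rule are all assumptions standing in for the paper's actual UAM scoring.

```python
def frame_energy(samples):
    """Mean squared amplitude of one node's audio frame."""
    return sum(s * s for s in samples) / len(samples)

def fuse_scores(energies, model_scores, alpha=0.5):
    """Pick a speaker by fusing normalized energy and model scores.

    energies, model_scores: dicts mapping speaker id -> raw score.
    alpha weights the energy cue against the model cue.
    """
    def normalize(d):
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in d.items()}

    e, m = normalize(energies), normalize(model_scores)
    fused = {k: alpha * e[k] + (1 - alpha) * m[k] for k in energies}
    return max(fused, key=fused.get)

# Toy example: node A records the loudest signal, but the model score
# favours speaker B; the fused decision arbitrates between the cues.
energies = {"A": frame_energy([0.9, -0.8, 0.7]), "B": frame_energy([0.2, -0.1, 0.3])}
model_scores = {"A": -12.0, "B": -3.0}
print(fuse_scores(energies, model_scores, alpha=0.7))  # "A": energy cue dominates
```

    Varying `alpha` trades the energy cue off against the model cue; tuning that balance is exactly what a hybrid scheme of this kind would calibrate per deployment.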

  9. Investigating the Influence of Biological Sex on the Behavioral and Neural Basis of Face Recognition

    PubMed Central

    Scherf, K Suzanne; Elbich, Daniel B; Motta-Mena, Natalie V

    2017-01-01

    There is interest in understanding the influence of biological factors, like sex, on the organization of brain function. We investigated the influence of biological sex on the behavioral and neural basis of face recognition in healthy, young adults. In behavior, there were no sex differences on the male Cambridge Face Memory Test (CFMT)+ or the female CFMT+ (that we created) and no own-gender bias (OGB) in either group. We evaluated the functional topography of ventral stream organization by measuring the magnitude and functional neural size of 16 individually defined face-, two object-, and two place-related regions bilaterally. There were no sex differences in any of these measures of neural function in any of the regions of interest (ROIs) or in group level comparisons. These findings reveal that men and women have similar category-selective topographic organization in the ventral visual pathway. Next, in a separate task, we measured activation within the 16 face-processing ROIs specifically during recognition of target male and female faces. There were no sex differences in the magnitude of the neural responses in any face-processing region. Furthermore, there was no OGB in the neural responses of either the male or female participants. Our findings suggest that face recognition behavior, including the OGB, is not inherently sexually dimorphic. Face recognition is an essential skill for navigating human social interactions, which is reflected equally in the behavior and neural architecture of men and women. PMID:28497111

  11. Target-context unitization effect on the familiarity-related FN400: a face recognition exclusion task.

    PubMed

    Guillaume, Fabrice; Etienne, Yann

    2015-03-01

    Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Beyond perceptual expertise: revisiting the neural substrates of expert object recognition

    PubMed Central

    Harel, Assaf; Kravitz, Dwight; Baker, Chris I.

    2013-01-01

    Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expertise-related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer, suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134

  13. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162
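    The cross-condition decoding result described above (training on visible-face patterns and testing on invisible ones) can be sketched with a toy nearest-centroid classifier. The synthetic "voxel patterns", the classifier choice, and all names below are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

# Hedged sketch of cross-condition decoding: a classifier fit on patterns
# from one condition is tested on patterns from the other. The shared
# "face-selective" pattern and noise model are assumptions for illustration.

def centroids(X, y):
    """Mean pattern per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def decode(X, cents):
    """Assign each row of X to the nearest class centroid."""
    labels = list(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

rng = np.random.default_rng(2)
face_axis = rng.standard_normal(20)       # shared face-selective pattern

def trials(present, n):                   # noisy 20-"voxel" patterns
    return rng.standard_normal((n, 20)) + (face_axis if present else 0)

X_vis = np.vstack([trials(True, 30), trials(False, 30)])    # "visible" run
y = np.array([1] * 30 + [0] * 30)
X_invis = np.vstack([trials(True, 30), trials(False, 30)])  # "invisible" run

cents = centroids(X_vis, y)               # train on visible patterns
acc = (decode(X_invis, cents) == y).mean()  # test on invisible patterns
print(acc > 0.8)                          # cross-decoding well above chance
```

    Because both conditions share the same underlying pattern, a classifier trained on one generalizes to the other, which is the logic behind the visible/invisible decoding transfer reported in the abstract.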

  14. Boy with cortical visual impairment and unilateral hemiparesis in Jeff Huntington's "Slip" (2011).

    PubMed

    Bianucci, R; Perciaccante, A; Appenzeller, O

    2016-11-15

    Face perception plays an important part in identifying a person's health status and is integral to so-called spot diagnosis in clinical neurology. Neurology depends in part on observation, description and interpretation of visual information. Similar skills are required in visual art. Here we report a case of cortical visual impairment (CVI) and unilateral facial weakness in a boy depicted by the painter Jeff Huntington (2011). The corollary is that art can serve as a clinical exercise. Art interpretation helps neurology students to apply the same skills they will use in clinical practice and to develop their observational and interpretive skills in non-clinical settings. Furthermore, the development of an increased awareness of emotional and character expression in the human face may facilitate successful doctor-patient relationships. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Effect of Visual Experience on Face Processing: A Developmental Study of Inversion and Non-Native Effects

    ERIC Educational Resources Information Center

    Sangrigoli, Sandy; de Schonen, Scania

    2004-01-01

    In adults, three phenomena are taken to demonstrate an experience effect on face recognition: an inversion effect, a non-native face effect (so-called "other-race" effect) and their interaction. It is crucial for our understanding of the developmental perception mechanisms of object processing to discover when these effects are present in…

  16. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  17. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396

  18. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  19. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    PubMed

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  20. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by a special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments demonstrating that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
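    One core step of such a pipeline, computing the wavelet LL sub-band and binarizing it to expose candidate regions, can be sketched as follows. This is a hedged toy version: a single-level Haar-style LL band via 2x2 averaging and a mean threshold, both of which are assumptions rather than the paper's exact procedure.

```python
import numpy as np

# Illustrative sketch (not the authors' code): compute an LL sub-band by
# 2x2 block averaging, then threshold it into a binary map in which
# candidate face regions could later be scanned with a template.

def haar_ll(img):
    """One level of a Haar-style LL sub-band: average each 2x2 block."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def binarize(ll, thresh=None):
    """Threshold the LL band (default: its mean) into a 0/1 map."""
    if thresh is None:
        thresh = ll.mean()
    return (ll > thresh).astype(np.uint8)

# Toy 4x4 "frame": dark left half, bright right half.
img = np.array([[10, 12, 200, 210],
                [11, 13, 205, 215],
                [ 9, 10, 198, 202],
                [12, 11, 201, 207]])
ll = haar_ll(img)        # 2x2 LL band
print(binarize(ll))      # bright right column flagged as candidate region
```

    A real implementation would iterate the decomposition to a deeper level and combine the resulting map with color scores, as the abstract describes.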

  1. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency.
Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  2. Holistic processing is finely tuned for faces of one's own race.

    PubMed

    Michel, Caroline; Rossion, Bruno; Han, Jaehyun; Chung, Chan-Sup; Caldara, Roberto

    2006-07-01

    Recognizing individual faces outside one's race poses difficulty, a phenomenon known as the other-race effect. Most researchers agree that this effect results from differential experience with same-race (SR) and other-race (OR) faces. However, the specific processes that develop with visual experience and underlie the other-race effect remain to be clarified. We tested whether the integration of facial features into a whole representation (holistic processing) was larger for SR than OR faces in Caucasians and Asians without life experience with OR faces. For both groups of participants, recognition of the upper half of a composite-face stimulus was more disrupted by the bottom half (the composite-face effect) for SR than for OR faces, demonstrating that SR faces are processed more holistically than OR faces. This differential holistic processing for faces of different races, probably a by-product of visual experience, may be a critical factor in the other-race effect.

  3. A defense of the subordinate-level expertise account for the N170 component.

    PubMed

    Rossion, Bruno; Curran, Tim; Gauthier, Isabel

    2002-09-01

    A recent paper in this journal reports two event-related potential (ERP) experiments interpreted as supporting the domain specificity of the visual mechanisms implicated in processing faces (Cognition 83 (2002) 1). The authors argue that because a large neurophysiological response to faces (N170) is less influenced by the task than the response to objects, and because the response for human faces extends to ape faces (for which we are not expert), we should reject the hypothesis that the face-sensitivity reflected by the N170 can be accounted for by the subordinate-level expertise model of object recognition (Nature Neuroscience 3 (2000) 764). In this commentary, we question this conclusion based on some of our own ERP work on expert object recognition as well as the work of others.

  4. Some effects of alcohol and eye movements on cross-race face learning.

    PubMed

    Harvey, Alistair J

    2014-01-01

    This study examines the impact of acute alcohol intoxication on visual scanning in cross-race face learning. The eye movements of a group of white British participants were recorded as they encoded a series of own- and different-race faces, under alcohol and placebo conditions. Intoxication reduced the rate and extent of visual scanning during face encoding, reorienting the focus of foveal attention away from the eyes and towards the nose. Differences in encoding eye movements also varied between own- and different-race face conditions as a function of alcohol. Fixations to both face types were less frequent and more lingering following intoxication, but in the placebo condition this was only the case for different-race faces. While reducing visual scanning, however, alcohol had no adverse effect on memory; only the encoding restrictions associated with sober different-race face processing led to poorer recognition. These results support perceptual expertise accounts of own-race face processing, but suggest that the adverse effects of alcohol on face learning published previously are not caused by foveal encoding restrictions. The implications of these findings for alcohol myopia theory are discussed.

  5. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  6. Localized Scleroderma

    MedlinePlus

    ... diagnosis of localized scleroderma is mainly by visual recognition, though a biopsy often may be done to ... sclerodactyly), and thickening of the skin on the face (without color changes).There are other clues that ...

  7. Face-n-Food: Gender Differences in Tuning to Faces.

    PubMed

    Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Females not only recognized the images as a face more readily (reporting a face in images where males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177

  9. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

    Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for a specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class scatter matrices and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. The experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, particularly on a complicated face database with various pose variations.
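    The tensor machinery underlying such a model can be illustrated with a minimal sketch of n-mode unfolding and the Fisher-style scatter matrices built from it. The code below is an assumed simplification for illustration; it computes one pair of scatter matrices rather than running the paper's full iterative optimization.

```python
import numpy as np

# Hedged sketch of TDCS-style tensor machinery (assumed notation, not the
# paper's code): a color face image is a third-order tensor (H x W x 3),
# and discriminant criteria are built from n-mode unfoldings.

def unfold(tensor, mode):
    """n-mode unfolding: the mode axis becomes rows, the rest are flattened."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def scatter_matrices(samples, labels, mode):
    """Between- and within-class scatter of mode-n unfoldings, Fisher style."""
    mats = {c: [unfold(x, mode) for x, y in zip(samples, labels) if y == c]
            for c in set(labels)}
    grand = np.mean([unfold(x, mode) for x in samples], axis=0)
    Sb = sum(len(v) * (np.mean(v, axis=0) - grand) @ (np.mean(v, axis=0) - grand).T
             for v in mats.values())
    Sw = sum(sum((m - np.mean(v, axis=0)) @ (m - np.mean(v, axis=0)).T for m in v)
             for v in mats.values())
    return Sb, Sw

# Two tiny 2x2x3 "color images" per class; mode 2 treats the 3 color
# channels as rows, which is where a color-space transform would act.
rng = np.random.default_rng(0)
X = [rng.standard_normal((2, 2, 3)) + c for c in (0, 0, 5, 5)]
Sb, Sw = scatter_matrices(X, [0, 0, 1, 1], mode=2)
print(Sb.shape, Sw.shape)  # (3, 3) (3, 3): scatter over the color channels
```

    An iterative TDCS-style procedure would alternate between maximizing the ratio of these scatter matrices for the color mode and for the two spatial modes; this sketch only shows where those matrices come from.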

  10. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for the automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribal and Mongolian-origin tribal groups of north-east India, together with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image combines the primary challenges of face recognition, i.e., illumination, expression, and pose. The database also includes some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful for biometric recognition research. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose scores may serve as control baselines for other researchers.
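    The PCA baseline mentioned above is the classic "eigenfaces" pipeline. A minimal sketch of such a baseline is shown below; the synthetic data and all function names are hypothetical, not the authors' evaluation code.

```python
import numpy as np

def eigenfaces_fit(train, n_components):
    """train: (n, pixels) flattened grayscale faces. Snapshot-method PCA."""
    mean = train.mean(axis=0)
    X = train - mean
    evals, evecs = np.linalg.eigh(X @ X.T)         # small n x n Gram matrix
    order = np.argsort(evals)[::-1][:n_components]
    comps = X.T @ evecs[:, order]                  # back-project to pixel space
    comps /= np.linalg.norm(comps, axis=0)
    return mean, comps

def project(mean, comps, imgs):
    return (imgs - mean) @ comps

def nearest_neighbour(gallery_feats, gallery_ids, probe_feats):
    """Assign each probe the identity of its nearest gallery feature vector."""
    d2 = ((probe_feats[:, None, :] - gallery_feats[None, :, :]) ** 2).sum(-1)
    return gallery_ids[np.argmin(d2, axis=1)]
```

    An LDA baseline would replace the PCA projection with class-discriminant directions but keep the same nearest-neighbour matching stage.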

  11. Effects of anticaricaturing vs. caricaturing and their neural correlates elucidate a role of shape for face learning.

    PubMed

    Schulz, Claudia; Kaufmann, Jürgen M; Walther, Lydia; Schweinberger, Stefan R

    2012-08-01

    To assess the role of shape information in unfamiliar face learning, we investigated the effects of photorealistic spatial anticaricaturing and caricaturing on later face recognition. We assessed behavioural performance and event-related brain potential (ERP) correlates of recognition, using different images of anticaricatures, veridical faces, or caricatures at learning and test. Relative to veridical faces, recognition performance improved for caricatures, with response-time decrements for anticaricatures. During learning, an amplitude pattern with caricatures>veridicals=anticaricatures was seen for N170, left-hemispheric ERP negativity during the P200 and N250 time segments (200-380 ms), and for a late positive component (LPC, 430-830 ms), whereas P200 and N250 responses exhibited an additional difference between veridicals and anticaricatures over the right hemisphere. During recognition, larger amplitudes for caricatures again started in the N170, whereas the P200 and the right-hemispheric N250 exhibited a more graded pattern of amplitude effects (caricatures>veridicals>anticaricatures), a result which was specific to learned but not novel faces in the N250. Together, the results (i) emphasise the role of facial shape for visual encoding in the learning of previously unfamiliar faces and (ii) provide important information about the neuronal timing of the encoding advantage enjoyed by faces with distinctive shape. Copyright © 2012 Elsevier Ltd. All rights reserved.
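    Spatial (anti)caricaturing of this kind is commonly implemented by scaling a face's landmark deviations from an average (norm) face; whether the study used exactly this scheme is an assumption, and the function name is hypothetical.

```python
import numpy as np

def caricature(landmarks, norm_face, level):
    """landmarks, norm_face: (n_points, 2) coordinate arrays.
    level > 1 exaggerates distinctive shape (caricature), level < 1
    shrinks it toward the average (anticaricature), level == 1 is veridical."""
    return norm_face + level * (landmarks - norm_face)
```

    The warped landmark set then drives a photorealistic image morph, so only shape, not texture, is manipulated.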

  12. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated, or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased rather than decreased activation. It is hypothesized here that VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    PubMed

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.

  14. Obligatory and facultative brain regions for voice-identity recognition

    PubMed Central

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Abstract Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. 
The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required. PMID:29228111
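    Voxel-based lesion-behaviour mapping, at its simplest, relates the behavioural score to lesion status voxel by voxel. The sketch below is a minimal, NumPy-only illustration with hypothetical names, using a Welch t statistic per voxel; real analyses additionally apply permutation-based multiple-comparison correction and control for lesion volume.

```python
import numpy as np

def vlsm_tmap(lesions, scores, min_n=2):
    """lesions: (n_patients, n_voxels) binary lesion masks; scores: (n_patients,).
    Returns a Welch t statistic per voxel (lesioned minus spared group mean),
    NaN where either group has fewer than min_n patients."""
    t = np.full(lesions.shape[1], np.nan)
    for v in range(lesions.shape[1]):
        a = scores[lesions[:, v] == 1]   # patients lesioned at voxel v
        b = scores[lesions[:, v] == 0]   # patients spared at voxel v
        if len(a) < min_n or len(b) < min_n:
            continue
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if se > 0:
            t[v] = (a.mean() - b.mean()) / se
    return t
```

    Strongly negative t values then flag voxels where damage is associated with worse voice-identity recognition.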

  16. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    PubMed

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, as a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. Using real-world data gathered from "Facebook," which were gathered as part of the "FaceBots" project, the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook" was established. We present here methods as well as results for utilizing this correlation in both directions. Both algorithms for utilizing knowledge of the social context for faster and better face recognition are given, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  18. Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans

    PubMed Central

    Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene

    2014-01-01

    The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767

  19. Beyond the FFA: The Role of the Ventral Anterior Temporal Lobes in Face Processing

    PubMed Central

    Collins, Jessica A.; Olson, Ingrid R.

    2014-01-01

    Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has been aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are interconnected with the FFA and OFA, and that they play a role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. Here we review recent research critically implicating the vATLs in face perception and memory. We propose that current models of face processing should be revised such that the ventral anterior temporal lobes serve a centralized role in the visual face-processing network. We speculate that a hierarchically organized system of face processing areas extends bilaterally from the inferior occipital gyri to the vATLs, with facial representations becoming increasingly complex and abstracted from low-level perceptual features as they move forward along this network. The anterior temporal face areas may serve as the apex of this hierarchy, instantiating the final stages of face recognition. We further argue that the anterior temporal face areas are ideally suited to serve as an interface between face perception and face memory, linking perceptual representations of individual identity with person-specific semantic knowledge. PMID:24937188

  20. Who is who: Italian norms for visual recognition and identification of celebrities.

    PubMed

    Bizzozero, I; Ferrari, F; Pozzoli, S; Saetti, M C; Spinnler, H

    2005-06-01

    Age-, education- and sex-adjusted norms are provided for two new neuropsychological tests, namely (i) Face Recognition (judgement of familiarity) and (ii) Person Identification (biographical contextualisation). Sixty-three pictures of celebrities and 63 of unknown people were selected following two interwoven criteria: the realm of the celebrity (i.e., entertainment, culture, and politics) and the period of celebrity acquisition (i.e., pre-war, post-war, and contemporary). Both media- and education-dependent knowledge of celebrities was considered. Ninety-eight unpaid healthy participants aged between 50 and 93 years and with at least 8 years of formal education took part in this study. Reference is made to serial models of familiar face/person processing: recognition is held to tap the activity of Personal Identity Nodes (PINs), and identification that of the Exemplar Semantics Archives. Given the seriality of the reference model, the Identification test is embedded in the Recognition test, which entails that only previously recognised faces were employed to derive norms for identification.

  1. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provides converging evidence that facial expressions of fear and anger may have co-evolved to mimic babyish and mature faces, respectively, in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes). In a speeded categorization task (Study 1) and a visual noise paradigm (Study 2), larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  2. Social Experience Does Not Abolish Cultural Diversity in Eye Movements

    PubMed Central

    Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto

    2011-01-01

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differs across cultural groups. Currently, many of the differences reported in previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626

  3. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition, and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames, and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members of the Canadian National Film Board (NFB) CineRoute site. For example, end-users will be able to retrieve movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
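    The temporal decomposition described above can be illustrated with a minimal XML builder. This is a deliberately schema-incomplete sketch: the element names follow MPEG-7 conventions but a real MADIS description would conform to the full MPEG-7 schema with namespaces and typed descriptors.

```python
import xml.etree.ElementTree as ET

def mpeg7_shot_description(shots):
    """shots: list of (start_frame, n_frames, annotation) tuples.
    Builds a minimal MPEG-7-style temporal decomposition into video
    segments, each carrying a text annotation and a media time span."""
    root = ET.Element('Mpeg7')
    decomp = ET.SubElement(ET.SubElement(root, 'Description'),
                           'TemporalDecomposition')
    for start, duration, annotation in shots:
        seg = ET.SubElement(decomp, 'VideoSegment')
        ET.SubElement(seg, 'TextAnnotation').text = annotation
        time = ET.SubElement(seg, 'MediaTime')
        ET.SubElement(time, 'MediaTimePoint').text = str(start)
        ET.SubElement(time, 'MediaDuration').text = str(duration)
    return ET.tostring(root, encoding='unicode')
```

    An XQuery engine can then filter such descriptions by production year, annotated faces, spoken words, or motion-activity descriptors.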

  4. Developmental Origins of the Other-Race Effect

    PubMed Central

    Anzures, Gizelle; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Tanaka, James W.; Lee, Kang

    2013-01-01

    The other-race effect (ORE) in face recognition refers to better recognition memory for faces of one’s own race than faces of another race—a common phenomenon among individuals living in primarily mono-racial societies. In this article, we review findings suggesting that early visual and sociocultural experiences shape one’s processing of familiar and unfamiliar race classes and give rise to the ORE within the 1st year of life. However, despite its early development, the ORE can be prevented, attenuated, and even reversed given experience with a novel race class. Social implications of the ORE are discussed in relation to development of race-based preferences for social partners and racial prejudices. PMID:24049246

  5. Famous faces and voices: Differential profiles in early right and left semantic dementia and in Alzheimer's disease.

    PubMed

    Luzzi, Simona; Baldinelli, Sara; Ranaldi, Valentina; Fabi, Katia; Cafazzo, Viviana; Fringuelli, Fabio; Silvestrini, Mauro; Provinciali, Leandro; Reverberi, Carlo; Gainotti, Guido

    2017-01-08

    Famous face and voice recognition is reported to be impaired both in semantic dementia (SD) and in Alzheimer's disease (AD), although more severely in the former. In AD, a coexistence of perceptual impairment in face and voice processing has also been reported, which could contribute to the altered performance in complex semantic tasks. On the other hand, in SD both face and voice recognition disorders could be related to the prevalence of atrophy in the right temporal lobe (RTL). The aim of the present study was twofold: (1) to investigate famous face and voice recognition in SD and AD to verify whether the two diseases show a differential pattern of impairment, resulting from disruption of different cognitive mechanisms; (2) to check whether face and voice recognition disorders prevail in patients with atrophy mainly affecting the RTL. To avoid the potential influence of primary perceptual problems on face and voice recognition, a pool of patients suffering from early SD and AD were administered a detailed set of tests exploring face and voice perception. Thirteen SD patients (8 with prevalence of right and 5 with prevalence of left temporal atrophy) and 25 AD patients, who did not show visual or auditory perceptual impairment, were finally selected and were administered an experimental battery exploring famous face and voice recognition and naming. Twelve SD patients underwent cerebral PET imaging and were classified as right or left SD according to the onset modality and to the prevalent decrease in FDG uptake in the right or left temporal lobe, respectively. Correlation of PET imaging with famous face and voice recognition was performed. Results showed a differential performance profile in the two diseases: AD patients were significantly impaired in the naming tests but showed preserved recognition, whereas SD patients were profoundly impaired both in naming and in recognition of famous faces and voices.
    Furthermore, face and voice recognition disorders prevailed in SD patients with RTL atrophy, who also showed a conceptual impairment on the Pyramids and Palm Trees test that was more marked in the pictorial than in the verbal modality. Finally, in the 12 SD patients for whom PET was available, a strong correlation between FDG uptake and face-to-name and voice-to-name matching was found in the right but not in the left temporal lobe. The data support the hypothesis of a different cognitive basis for the impairment of face and voice recognition in the two dementias and suggest that the pattern of impairment in SD may be due to a loss of semantic representations, while a defect of semantic control, with impaired naming and preserved recognition, might be hypothesized in AD. Furthermore, the correlation between face and voice recognition disorders and RTL damage is consistent with the hypothesis that person-specific knowledge in the RTL may be mainly based upon non-verbal representations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
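    The two reference frames contrasted here differ only by the image's angular size, which depends on viewing distance: a fixed image-based SF (cycles/image) maps to a larger retina-based SF (cycles/degree) as the face moves away. A small worked conversion, with hypothetical image size and distances:

```python
import math

def cycles_per_degree(cycles_per_image, image_size_cm, viewing_distance_cm):
    """Retina-based SF of an image with fixed cycles/image content.
    The image's angular size shrinks with distance, so retinal SF grows."""
    visual_angle_deg = 2 * math.degrees(
        math.atan(image_size_cm / (2 * viewing_distance_cm)))
    return cycles_per_image / visual_angle_deg

# the same 8 cycles/image face viewed near (57 cm) and far (228 cm):
near = cycles_per_degree(8, 10, 57)
far = cycles_per_degree(8, 10, 228)
```

    A neuron tuned in cycles/image responds identically to both presentations (distance invariance), whereas a neuron tuned in cycles/degree does not, which is the property the amygdala neurons exhibited.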

  7. Orientation Encoding and Viewpoint Invariance in Face Recognition: Inferring Neural Properties from Large-Scale Signals.

    PubMed

    Ramírez, Fernando M

    2018-05-01

    Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face-views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face-identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance is stressed of explicit models relating neural properties to large-scale signals.
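    The angular-versus-Euclidean caveat can be made concrete: two response patterns with the same tuning direction but different overall gain have zero angular distance yet a large Euclidean distance, so the two metrics can license different inferences about a population code. A minimal numeric illustration (the vectors are arbitrary stand-ins for voxel patterns):

```python
import numpy as np

def euclidean_dist(a, b):
    return float(np.linalg.norm(a - b))

def angular_dist(a, b):
    """Angle between pattern vectors, insensitive to overall response gain."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# same tuning direction, doubled response gain:
a = np.array([1.0, 2.0, 3.0])
b = 2.0 * a
```

    Here angular distance treats a and b as identical codes, while Euclidean distance treats them as very different, which is exactly the kind of divergence that can mimic or mask mirror-symmetric coding in pattern analyses.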

  8. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.

    PubMed

    Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk

    2017-07-01

    This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement (that is, better long-term memory for emotional than for neutral stimuli) and the emotion-induced recognition bias (that is, a more liberal response criterion for emotional than for neutral stimuli). Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus-features account. The double dissociation in the results favors the latter account, that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.

  9. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity ratings for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition rates for the intended emotion in the audio-only, visual-only, and audio-visual data are 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  10. Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness.

    PubMed

    Ambert-Dahan, Emmanuèle; Giraud, Anne-Lise; Mecheri, Halima; Sterkers, Olivier; Mosnier, Isabelle; Samson, Séverine

    2017-10-01

    Visual processing has been extensively explored in deaf subjects in the context of verbal communication, through the assessment of speech reading and sign language abilities. However, little is known about visual emotional processing in adult progressive deafness, and after cochlear implantation. The goal of our study was thus to assess the influence of acquired post-lingual progressive deafness on the recognition of dynamic facial emotions that were selected to express canonical fear, happiness, sadness, and anger. A total of 23 adults with post-lingual deafness, assessed either before (n = 10) or after (n = 13) cochlear implantation (CI), and 13 normal-hearing (NH) individuals participated in the current study. Participants were asked to rate the expression of the four cardinal emotions, and to evaluate both their emotional valence (unpleasant-pleasant) and arousal potential (relaxing-stimulating). We found that patients with deafness were impaired in the recognition of sad faces, and that patients equipped with a CI were additionally impaired in the recognition of happiness and fear (but not anger). Relative to controls, all patients with deafness showed a deficit in perceiving arousal expressed in faces, while valence ratings remained unaffected. The current results show for the first time that acquired and progressive deafness is associated with a reduction of emotional sensitivity to visual stimuli. This negative impact of progressive deafness on the perception of dynamic facial cues for emotion recognition contrasts with the proficiency of deaf subjects with and without CIs in processing visual speech cues (Rouger et al., 2007; Strelnikov et al., 2009; Lazard and Giraud, 2017). Altogether, these results suggest a trade-off between the processing of linguistic and non-linguistic visual stimuli. Copyright © 2017. Published by Elsevier B.V.

  11. Hemispheric asymmetry in holistic processing of words.

    PubMed

    Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N

    2018-05-13

    Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words, and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres to holistic word processing. Observers performed same/different judgments on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right-hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.

  12. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  13. Successful Face Recognition Is Associated with Increased Prefrontal Cortex Activation in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Herrington, John D.; Riley, Meghan E.; Grupe, Daniel W.; Schultz, Robert T.

    2015-01-01

    This study examines whether deficits in visual information processing in autism-spectrum disorder (ASD) can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were…

  14. Visual scan paths are abnormal in deluded schizophrenics.

    PubMed

    Phillips, M L; David, A S

    1997-01-01

    One explanation for delusion formation is that delusions result from a distorted appreciation of complex stimuli. This study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing: the visual scan path, a map tracing the direction and duration of gaze when an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less deluded will allow further clarification of the relationship between viewing strategies and delusions.

  15. Specific and Nonspecific Neural Activity during Selective Processing of Visual Representations in Working Memory

    ERIC Educational Resources Information Center

    Oh, Hwamee; Leung, Hoi-Chung

    2010-01-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…

  16. Meet The Simpsons: top-down effects in face learning.

    PubMed

    Bonner, Lesley; Burton, A Mike; Jenkins, Rob; McNeill, Allan; Bruce, Vicki

    2003-01-01

    We examined whether prior knowledge of a person affects the visual processes involved in learning a face. In two experiments, subjects were taught to associate human faces with characters they knew (from the TV show The Simpsons) or characters they did not (novel names). In each experiment, knowledge of the character predicted performance in a recognition memory test, relying only on old/new confidence ratings. In experiment 1, we established the technique and showed that there is a face-learning advantage for known people, even when face items are counterbalanced for familiarity across the experiment. In experiment 2 we replicated the effect in a setting which discouraged subjects from attending more to known than unknown people, and eliminated any visual association between face stimuli and a character from The Simpsons. We conclude that prior knowledge about a person can enhance learning of a new face.

  17. Neural Mechanism for Mirrored Self-face Recognition.

    PubMed

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. © The Author 2014. Published by Oxford University Press.

  18. Neural Mechanism for Mirrored Self-face Recognition

    PubMed Central

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-01-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a “virtual mirror” system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. PMID:24770712

  19. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  20. Facial patterns in a tropical social wasp correlate with colony membership

    NASA Astrophysics Data System (ADS)

    Baracchi, David; Turillazzi, Stefano; Chittka, Lars

    2016-10-01

    Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate group-level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition, by which wasps discriminate 'friends' from 'foes'. Using geometric morphometric analysis, a technique based on a rigorous statistical theory of shape that allows quantitative multivariate analyses of structure shapes, we first quantified facial marking variation in Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that, despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might be perceived and used by wasps for colony-level recognition.
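    The colony-level signal in this record is, at bottom, a between- versus within-group separation in a multivariate shape space. As a hedged illustration only (the data, function names, and nearest-centroid rule below are invented for this sketch and are not the study's morphometric pipeline), the idea can be reduced to assigning a face's shape scores to the colony whose centroid is closest:

```python
def centroid(points):
    """Mean point of a list of (x, y) shape scores."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def assign_colony(face, colonies):
    """Assign a face's shape scores to the colony with the nearest centroid."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    cents = {name: centroid(pts) for name, pts in colonies.items()}
    return min(cents, key=lambda name: dist2(face, cents[name]))

# Invented facial-marking scores for two colonies (e.g., first two
# principal components of the shape variables)
colonies = {
    "A": [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)],
    "B": [(0.9, 0.8), (1.0, 0.7), (0.8, 0.9)],
}
print(assign_colony((0.15, 0.25), colonies))  # "A"
print(assign_colony((0.85, 0.75), colonies))  # "B"
```

    If faces carry colony-level information, new faces should fall nearer their own colony's centroid than a foreign one, which is the pattern the discriminant analysis in the study detected.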

  1. The Facespan-the perceptual span for face recognition.

    PubMed

    Papinutto, Michael; Lao, Junpeng; Ramon, Meike; Caldara, Roberto; Miellet, Sébastien

    2017-05-01

    In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces-the Facespan-remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight size apertures. Analyses of Structural Similarity comparing the available information during spotlight and natural viewing conditions indicate that the Facespan-the minimum spatial extent of preserved facial information leading to comparable performance as in natural viewing-encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations that will address if and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.
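    The Structural Similarity analysis mentioned in this record can be illustrated with a minimal sketch. This is not the authors' implementation: it computes the simplified global SSIM index over flat intensity sequences rather than the windowed, image-based version, and the `full`/`masked` data are made-up stand-ins for natural versus spotlight viewing.

```python
import statistics

def _cov(x, y, mx, my):
    # Population covariance of two equal-length sequences.
    return statistics.fmean([(a - mx) * (b - my) for a, b in zip(x, y)])

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM index for intensities scaled to [0, 1]."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy, cov = _cov(x, x, mx, mx), _cov(y, y, my, my), _cov(x, y, mx, my)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

full = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4]    # hypothetical full-view intensities
masked = [0.2, 0.8, 0.5, 0.5, 0.5, 0.5]  # same face, aperture-masked region
print(ssim(full, full))    # identical input -> 1.0
print(ssim(full, masked))  # masking lowers similarity
```

    Comparing such similarity scores between spotlight and natural viewing, as a function of aperture size, is what lets one ask how much preserved information yields equivalent recognition performance.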

  2. Geometric distortions affect face recognition in chimpanzees (Pan troglodytes) and monkeys (Macaca mulatta).

    PubMed

    Taubert, Jessica; Parr, Lisa A

    2011-01-01

    All primates can recognize faces and do so by analyzing the subtle variation that exists between faces. Through a series of three experiments, we attempted to clarify the nature of second-order information processing in nonhuman primates. Experiment one showed that both chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta) tolerate geometric distortions along the vertical axis, suggesting that information about absolute position of features does not contribute to accurate face recognition. Chimpanzees differed from monkeys, however, in that they were more sensitive to distortions along the horizontal axis, suggesting that when building a global representation of facial identity, horizontal relations between features are more diagnostic of identity than vertical relations. Two further experiments were performed to determine whether the monkeys were simply less sensitive to horizontal relations compared to chimpanzees or were instead relying on local features. The results of these experiments confirm that monkeys can utilize a holistic strategy when discriminating between faces regardless of familiarity. In contrast, our data show that chimpanzees, like humans, use a combination of holistic and local features when the faces are unfamiliar, but primarily holistic information when the faces become familiar. We argue that our comparative approach to the study of face recognition reveals the impact that individual experience and social organization has on visual cognition.

  3. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks such as face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
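    To make the baseline in this record concrete: Direct Pixelization, the reference condition the two proposed strategies are compared against, amounts to block-averaging an image down to a coarse phosphene grid. The sketch below is a hedged illustration (the function name and toy image are invented, and the saliency segmentation and enhancement steps are omitted), not the paper's code.

```python
def pixelize(image, grid):
    """Down-sample a grayscale image (list of rows) to a grid x grid
    phosphene map by averaging each block, i.e. Direct Pixelization."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid  # block size per phosphene
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# 4x4 toy image with a bright object in the top-left quadrant
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(pixelize(img, 2))  # [[1.0, 0.0], [0.0, 0.0]]
```

    The strategies in the study improve on this baseline by segmenting the salient object first and pixelizing (or shrinking) foreground and background separately, so the limited phosphene budget is spent on the object of interest.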

  4. Challenges older adults face in detecting deceit: the role of emotion recognition.

    PubMed

    Stanley, Jennifer Tehan; Blanchard-Fields, Fredda

    2008-03-01

    Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only. (c) 2008 APA, all rights reserved.

  5. Intact anger recognition in depression despite aberrant visual facial information usage.

    PubMed

    Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M

    2014-08-01

    Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face-gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    PubMed Central

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory and a distributed representation of concepts. PMID:23040175

  7. Famous face identification in temporal lobe epilepsy: support for a multimodal integration model of semantic memory.

    PubMed

    Drane, Daniel L; Ojemann, Jeffrey G; Phatak, Vaishali; Loring, David W; Gross, Robert E; Hebb, Adam O; Silbergeld, Daniel L; Miller, John W; Voets, Natalie L; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J; Ojemann, George A; Tranel, Daniel

    2013-06-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory and a distributed representation of concepts. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Not just the norm: exemplar-based models also predict face aftereffects.

    PubMed

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
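    The simulation logic behind this record can be conveyed with a toy one-dimensional face space. The sketch below is an illustrative exemplar-style channel model, not the authors' code, and every parameter value is an arbitrary assumption: adapting reduces the gain of Gaussian-tuned channels near the adaptor, and the decoded identity of a test face then shifts away from the adaptor, reproducing the qualitative identity aftereffect without an explicit norm.

```python
import math

def channel_responses(face, centers, gains, sigma=1.5):
    """Gaussian-tuned exemplar channels responding to a 1-D 'face' value."""
    return [g * math.exp(-(face - c) ** 2 / (2 * sigma ** 2))
            for c, g in zip(centers, gains)]

def decode(face, centers, gains):
    """Population-vector readout of the perceived face position."""
    r = channel_responses(face, centers, gains)
    return sum(ri * ci for ri, ci in zip(r, centers)) / sum(r)

centers = [c * 0.5 for c in range(-10, 11)]  # exemplar positions in face space
baseline = [1.0] * len(centers)

# Adaptation: channels tuned near the adaptor lose gain.
adaptor, k, sigma = 2.0, 0.5, 1.5
adapted = [1.0 - k * math.exp(-(adaptor - c) ** 2 / (2 * sigma ** 2))
           for c in centers]

test_face = 0.0  # the average (norm) face
before = decode(test_face, centers, baseline)
after = decode(test_face, centers, adapted)
print(before, after)  # decoded identity shifts away from the adaptor
```

    The point of the paper is precisely that a population of exemplar channels like this, with no stored norm, is enough to produce the aftereffect usually cited as evidence for norm-based coding.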

  9. Not Just the Norm: Exemplar-Based Models also Predict Face Aftereffects

    PubMed Central

    Ross, David A.; Deroche, Mickael; Palmeri, Thomas J.

    2014-01-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted towards a face with opposite attributes to the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation. PMID:23690282

  10. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency with which individual exemplars of an object category appear during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. 
The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24, 2008].
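    The core statistical claim can be illustrated with a deliberately simplified stand-in for TM (this toy is our assumption, not the authors' network): each simulated subject has a domain-general resource v; face skill always expresses v, while object skill expresses v only to the degree that experience has required individuating the category.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
v = rng.normal(size=n)                     # domain-general ability "v"

def domain_correlation(experience):
    # face skill always expresses v; object skill expresses v only in
    # proportion to individuation experience (toy assumption)
    face = v + 0.5 * rng.normal(size=n)
    obj = (experience * v
           + (1.0 - experience) * rng.normal(size=n)
           + 0.5 * rng.normal(size=n))
    return np.corrcoef(face, obj)[0, 1]

low = domain_correlation(0.1)    # little individuation experience
high = domain_correlation(0.9)   # extensive individuation experience
# high > low: shared variance grows with experience, as in the model
```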

  11. [False recognition of faces associated with fronto-temporal dementia with prosopagnosia].

    PubMed

    Verstichel, P

    2005-09-01

    The association of prosopagnosia and false recognition of faces is unusual and contributes to our understanding of how facial familiarity is generated. A 67-year-old man with a left prefrontal traumatic lesion developed the temporal variant of fronto-temporal dementia (semantic dementia) with amyotrophic lateral sclerosis. Cerebral imaging demonstrated bilateral anterior temporal atrophy predominating in the right hemisphere. The main cognitive signs consisted of severe difficulty recognizing the faces of familiar people (prosopagnosia), associated with systematic false recognition of unfamiliar people. Neuropsychological testing indicated that the prosopagnosia probably resulted from the combination of an associative/mnemonic mechanism (inability to activate the Face Recognition Units (FRUs) from the visual input) and a semantic mechanism (degradation of semantic/biographical information, or disconnection between the FRUs and this information). At the early stage of the disease, the patient could activate residual semantic information about individuals from their names, but after a 4-year course he failed to do so. This worsening could be attributed to the extension of the degenerative lesions to the left temporal lobe. Both familiar and unfamiliar faces triggered a marked feeling of knowing. False recognition concerned all the unfamiliar faces; the patient spontaneously claimed that they were actors, but he could not provide any additional information about their specific identities. The coexistence of prosopagnosia and false recognition suggests the existence of distinct interconnected systems processing face recognition, one dedicated to the identification of individuals and the other producing the sense of familiarity. Dysfunctions at different stages of one or the other of these two processes could result in distortions of the feeling of knowing. 
From this case and others reported in the literature, we propose to extend the classical model of face processing by adding a pathway linked to the limbic system and frontal structures. This latter pathway would normally emit essentially autonomic familiarity signals in response to familiar faces. These signals, initially unconscious, subsequently reach consciousness and are then integrated by a central supervisory system, which evaluates and verifies identity-specific biographical information in order to make a decision about the sense of familiarity.

  12. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has long been studied, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired cannot benefit from audio-based speech recognition. Visual information is therefore also important for recognizing speech automatically: people understand speech not only from audio information, but also from visual cues such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without using any audio information. First, the ASM (Active Shape Model) is used to detect and track the face and lips in a video sequence. Second, the shape, optical flow, and spatial frequencies of the lip features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
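    The classification stage of such a pipeline can be sketched as follows. This is a minimal sketch under stated assumptions: the ASM/optical-flow descriptors are stubbed with synthetic per-frame features (the `frame_features` function is hypothetical), and the SVM is a hand-rolled Pegasos-style linear SVM where a library classifier (e.g. scikit-learn's `SVC`) would normally be used.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(t, word):
    # Stand-in for the per-frame lip descriptors (ASM shape, optical
    # flow, spatial frequency); real features would come from a tracker.
    phase = 0.0 if word == 0 else 1.5
    f = np.array([np.sin(t + phase), np.sin(t + phase) ** 2, np.cos(t)])
    return f + 0.05 * rng.normal(size=3)

def clip_vector(word, n_frames=10):
    # Order the per-frame features chronologically into one clip vector
    return np.concatenate([frame_features(t, word) for t in range(n_frames)])

# Two "words", 20 clips each, labeled +1 / -1
X = np.array([clip_vector(w) for w in [0, 1] * 20])
y = np.array([1, -1] * 20)

# Minimal linear SVM trained with Pegasos-style subgradient descent
w, lam = np.zeros(X.shape[1]), 0.01
for step in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * step)
    margin = y[i] * (X[i] @ w)
    w *= (1.0 - eta * lam)
    if margin < 1.0:
        w += eta * y[i] * X[i]

acc = np.mean(np.sign(X @ w) == y)   # training accuracy on the two words
```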

  13. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  14. Right perceptual bias and self-face recognition in individuals with congenital prosopagnosia.

    PubMed

    Malaspina, Manuela; Albonico, Andrea; Daini, Roberta

    2016-01-01

    In normal individuals, a tendency has been demonstrated to base judgments more on the half of a facial stimulus that falls in the observer's left visual field (the left perceptual bias, LPB). However, less is known about whether this phenomenon exists in people affected by face-recognition impairment from birth, namely congenital prosopagnosics. In the current study, we investigated the presence of the LPB under face-impairment conditions using chimeric stimuli and the most familiar face of all: the self-face. For this purpose we tested 10 participants with congenital prosopagnosia and 21 healthy controls on a face matching task involving a spatial manipulation of the left and right hemi-faces of self-photos and photos of others. Even though the congenital prosopagnosics' performance was significantly lower than that of controls, both groups showed a consistent self-face advantage. Moreover, congenital prosopagnosics performed best when the right side of their own face was presented, that is, they showed a right perceptual bias, suggesting a different strategy for self-recognition in these subjects. A possible explanation for this result is discussed.

  15. Unconscious presentation of fearful face modulates electrophysiological responses to emotional prosody.

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2015-03-01

    Cross-modal integration of visual and auditory emotional cues is thought to be advantageous for the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expressions in the neurologically intact population is still elusive. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression, simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically, without conscious awareness. In addition, the global field power during the late-latency range was larger for the shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, supporting the view that this cortical region, traditionally considered a unisensory region for visual processing, functions as a locus of audiovisual integration of emotional signals. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Preliminary findings on the Cross Cultural Test Of Face Recognition.

    PubMed

    O'Bryant, Sid E; McCaffrey, Robert J

    2006-01-01

    The utility of psychological tests cross-culturally has received a great deal of attention in psychology. However, the same level of attentiveness has yet to be realized in the field of neuropsychology. One area of neuropsychological assessment that may be negatively impacted by racial and ethnic variables is the assessment of facial memory. There is a wealth of literature in cognitive psychology demonstrating an own-race recognition bias in human face memory: simply stated, individuals are more apt to accurately remember faces of individuals of their own race than those of other races. Study 1 was conducted to evaluate the performance of the Cross Cultural Test of Face Recognition (CCTFR) in a diverse sample of non-impaired individuals. Results suggested the presence of an own-race recognition bias, but this had no impact on overall CCTFR performance. Study 2 was conducted to determine whether CVA patients would perform more poorly than community controls on the CCTFR. As predicted, community controls significantly outperformed CVA patients, and the CCTFR demonstrated a moderate level of sensitivity and specificity. Combined, the results of these studies provide preliminary support for the utility of the CCTFR in the neuropsychological assessment of diverse patient populations. Clearly more research is needed to support the utility of this new test of face recognition, but preliminary results suggest that it may be a viable option for the assessment of visual memory across a spectrum of ethnic groups.

  17. Discriminating Power of Localized Three-Dimensional Facial Morphology

    PubMed Central

    Hammond, Peter; Hutton, Tim J.; Allanson, Judith E.; Buxton, Bernard; Campbell, Linda E.; Clayton-Smith, Jill; Donnai, Dian; Karmiloff-Smith, Annette; Metcalfe, Kay; Murphy, Kieran C.; Patton, Michael; Pober, Barbara; Prescott, Katrina; Scambler, Pete; Shaw, Adam; Smith, Ann C. M.; Stevens, Angela F.; Temple, I. Karen; Hennekam, Raoul; Tassabehji, May

    2005-01-01

    Many genetic syndromes involve a facial gestalt that suggests a preliminary diagnosis to an experienced clinical geneticist even before a clinical examination and genotyping are undertaken. Previously, using visualization and pattern recognition, we showed that dense surface models (DSMs) of full face shape characterize facial dysmorphology in Noonan and in 22q11 deletion syndromes. In this much larger study of 696 individuals, we extend the use of DSMs of the full face to establish accurate discrimination between controls and individuals with Williams, Smith-Magenis, 22q11 deletion, or Noonan syndromes and between individuals with different syndromes in these groups. However, the full power of the DSM approach is demonstrated by the comparable discriminating abilities of localized facial features, such as periorbital, perinasal, and perioral patches, and the correlation of DSM-based predictions and molecular findings. This study demonstrates the potential of face shape models to assist clinical training through visualization, to support clinical diagnosis of affected individuals through pattern recognition, and to enable the objective comparison of individuals sharing other phenotypic or genotypic properties. PMID:16380911

  18. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  19. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A self-organized learning strategy for object recognition by an embedded line of attraction

    NASA Astrophysics Data System (ADS)

    Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.

    2012-04-01

    For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face-expression-variant database for face recognition have shown that the proposed nonlinear line attractor successfully identifies individuals and provides a better recognition rate than state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also yielded excellent recognition rates for images captured in complex lighting environments. Experiments performed on the Japanese female facial expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
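    The converge-to-familiar / flag-the-unfamiliar idea can be illustrated with a classical Hopfield associative memory, used here as a deliberately simplified stand-in for the paper's nonlinear line attractor (the architecture below is not the authors' model): corrupted versions of stored patterns fall back into a stored attractor, while a low overlap after convergence is the cue that a pattern is far from all memories and could seed a new state.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                        # binary units (+1/-1)

stored = [rng.choice([-1, 1], size=N) for _ in range(3)]
W = sum(np.outer(p, p) for p in stored) / N   # Hebbian weights
np.fill_diagonal(W, 0)

def recall(x, steps=20):
    x = x.copy()
    for _ in range(steps):                    # synchronous dynamics
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

def familiarity(x):
    # Max overlap of the converged state with any stored pattern;
    # values near 1 mean the dynamics fell into a stored attractor.
    y = recall(x)
    return max(abs(y @ p) / N for p in stored)

probe = stored[0].copy()
probe[:8] *= -1                               # corrupt a familiar pattern
overlap = familiarity(probe)                  # near 1: recognized

# A low familiarity score flags a pattern far from all stored states --
# the kind of signal the paper uses to create a new state (attractor).
novel_score = familiarity(rng.choice([-1, 1], size=N))
```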

  1. Altered topology of neural circuits in congenital prosopagnosia.

    PubMed

    Rosenthal, Gideon; Tanzer, Michal; Simony, Erez; Hasson, Uri; Behrmann, Marlene; Avidan, Galia

    2017-08-21

    Using a novel, fMRI-based inter-subject functional correlation (ISFC) approach, which isolates stimulus-locked inter-regional correlation patterns, we compared the cortical topology of the neural circuit for face processing in participants with an impairment in face recognition, congenital prosopagnosia (CP), and matched controls. Whereas the anterior temporal lobe served as the major network hub for face processing in controls, this was not the case for the CPs. Instead, this group evinced hyper-connectivity in posterior regions of the visual cortex, mostly associated with the lateral occipital and the inferior temporal cortices. Moreover, the extent of this hyper-connectivity was correlated with the face recognition deficit. These results offer new insights into the perturbed cortical topology in CP, which may serve as the underlying neural basis of the behavioral deficits typical of this disorder. The approach adopted here has the potential to uncover altered topologies in other neurodevelopmental disorders, as well.
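    The leave-one-out ISFC computation described above can be sketched in a few lines. This is a minimal numpy sketch on synthetic data; array shapes and names are assumptions, not the authors' code. Each subject's regional time course is correlated with the average of all other subjects, so idiosyncratic noise cancels and only the stimulus-locked component survives.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_reg, n_t = 8, 4, 200

# Synthetic data: every subject shares a stimulus-locked component per
# region, plus idiosyncratic noise.
shared = rng.normal(size=(n_reg, n_t))
data = np.array([0.7 * shared + 0.7 * rng.normal(size=(n_reg, n_t))
                 for _ in range(n_sub)])

def isfc(data):
    """Leave-one-out inter-subject functional correlation matrix."""
    n_sub, n_reg, _ = data.shape
    out = np.zeros((n_reg, n_reg))
    for s in range(n_sub):
        others = data[np.arange(n_sub) != s].mean(axis=0)
        # correlate subject s's regions with the group average of the rest
        c = np.corrcoef(data[s], others)[:n_reg, n_reg:]
        out += c
    return out / n_sub

M = isfc(data)
# Diagonal entries isolate the stimulus-locked signal; subject-specific
# noise, being uncorrelated across subjects, is averaged away.
```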

  2. Face recognition in capuchin monkeys (Cebus apella).

    PubMed

    Pokorny, Jennifer J; de Waal, Frans B M

    2009-05-01

    Primates live in complex social groups that necessitate recognition of the individuals with whom they interact. In humans, faces provide a visual means by which to gain information such as identity, allowing us to distinguish between both familiar and unfamiliar individuals. The current study used a computerized oddity task to investigate whether a New World primate, Cebus apella, can discriminate the faces of in-group and out-group conspecifics based on identity. The current study, which improved on past methodologies, demonstrates that capuchins recognize the faces of both familiar and unfamiliar conspecifics. Once a performance criterion had been reached, subjects successfully transferred to a large number of novel images within the first 100 trials, thus ruling out performance based on previous conditioning. Capuchins can be added to a growing list of primates that appear to recognize two-dimensional facial images of conspecifics. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  3. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia

    PubMed Central

    Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.

    2014-01-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310

  5. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  6. Associated impairment of the categories of conspecifics and biological entities: cognitive and neuroanatomical aspects of a new case.

    PubMed

    Capitani, Erminio; Chieppa, Francesca; Laiacona, Marcella

    2010-05-01

    Case A.C.A. presented an associated impairment of visual recognition and semantic knowledge for celebrities and biological objects. This case was relevant for (a) the neuroanatomical correlations, and (b) the relationship between visual recognition and semantics within the biological domain and the conspecifics domain. A.C.A. was not affected by anterior temporal damage. Her bilateral vascular lesions were localized on the medial and inferior temporal gyrus on the right and on the intermediate fusiform gyrus on the left, without concomitant lesions of the parahippocampal gyrus or posterior fusiform. Data analysis was based on a novel methodology developed to estimate the rate of stored items in the visual structural description system (SDS) or in the face recognition unit. For each biological object, no particular correlation was found between the visual information accessed through the semantic system and that tapped by the picture reality judgement. Findings are discussed with reference to whether a putative resource commonality is likely between biological objects and conspecifics, and whether or not either category may depend on an exclusive neural substrate.

  7. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here, individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to different degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that their single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the least to the most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing. PMID:27531986

  8. Social Cognition in Williams Syndrome: Face Tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and fascination with faces. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to different degrees (in a style slightly bordering on Giuseppe Arcimboldo). The primary advantage of these images is that their single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already signals the presence of a face. In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the least to the most face-resembling. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as faces: they did not report seeing a face in images that typically developing controls effortlessly recognized as such, and gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing.

  9. Functional MRI study of diencephalic amnesia in Wernicke-Korsakoff syndrome.

    PubMed

    Caulo, M; Van Hecke, J; Toma, L; Ferretti, A; Tartaro, A; Colosimo, C; Romani, G L; Uncini, A

    2005-07-01

    Anterograde amnesia in Wernicke-Korsakoff syndrome is associated with diencephalic lesions, mainly in the anterior thalamic nuclei. Whether diencephalic and temporal lobe amnesias are distinct entities is still not clear. We investigated episodic memory for faces using functional MRI (fMRI) in eight controls and in a 34-year-old man with Wernicke-Korsakoff syndrome and diencephalic lesions but without medial temporal lobe (MTL) involvement at MRI. fMRI was performed with a 1.5 tesla unit. Three dual-choice tasks were employed: (i) face encoding (18 faces were randomly presented three times and subjects were asked to memorize the faces); (ii) face perception (subjects indicated which of two faces matched a third face); and (iii) face recognition (subjects indicated which of two faces belonged to the group they had been asked to memorize during encoding). All activation was greater in the right hemisphere. In controls both the encoding and recognition tasks activated two hippocampal regions (anterior and posterior). The anterior hippocampal region was more activated during recognition. Activation in the prefrontal cortex was greater during recognition. In the subject with Wernicke-Korsakoff syndrome, fMRI did not show hippocampal activation during either encoding or recognition. During recognition, although behavioural data showed defective retrieval, the prefrontal regions were activated as in controls, except for the ventrolateral prefrontal cortex. fMRI activation of the visual cortices and the behavioural score on the perception task indicated that the subject with Wernicke-Korsakoff syndrome perceived the faces, paid attention to the task and demonstrated accurate judgement. In the subject with Wernicke-Korsakoff syndrome, although the anatomical damage does not involve the MTL, the hippocampal memory encoding has been lost, possibly as a consequence of the hippocampal-anterior thalamic axis involvement. 
Anterograde amnesia could therefore be the expression of damage to an extended hippocampal system, and the distinction between temporal lobe and diencephalic amnesia is of limited value. In the subject with Wernicke-Korsakoff syndrome, the preserved dorsolateral prefrontal cortex activation during incorrect recognition suggests that this region is involved in orientation and attention at retrieval rather than in retrieval itself. The lack of activation of the ventrolateral prefrontal cortex confirms the role of this area in episodic memory formation.

  10. Oxytocin enhances attentional bias for neutral and positive expression faces in individuals with higher autistic traits.

    PubMed

    Xu, Lei; Ma, Xiaole; Zhao, Weihua; Luo, Lizhu; Yao, Shuxia; Kendrick, Keith M

    2015-12-01

    There is considerable interest in the potential therapeutic role of the neuropeptide oxytocin in altering attentional bias towards emotional social stimuli in psychiatric disorders. However, it is still unclear whether oxytocin primarily influences attention towards positive or negative valence social stimuli. Here, in a double-blind, placebo-controlled, between-subject design experiment in 60 healthy male subjects, we used the highly sensitive dual-target rapid serial visual presentation (RSVP) paradigm to investigate whether intranasal oxytocin (40 IU) treatment alters attentional bias for emotional faces. Results show that oxytocin improved recognition accuracy of neutral and happy expression faces presented in the second target position (T2) during the period of reduced attentional capacity following prior presentation of a first neutral face target (T1), but had no effect on recognition of negative expression faces (angry, fearful, sad). Oxytocin also had no effect on recognition of non-social stimuli (digits) in this task. Recognition accuracy for neutral faces at T2 was negatively associated with autism spectrum quotient (ASQ) scores in the placebo group, and oxytocin's facilitatory effects were restricted to a sub-group of subjects with higher ASQ scores. Our results therefore indicate that oxytocin primarily enhances the allocation of attentional resources towards faces expressing neutral or positive emotion and does not influence attention towards negative expression faces or non-social stimuli. This effect of oxytocin is strongest in healthy individuals with higher autistic trait scores, thereby providing further support for its potential therapeutic use in autism spectrum disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology

    PubMed Central

    Fernandez-Mendez, Felipe; Barcala-Furelos, Roberto; Padron-Cabo, Alexis; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency, and decision-making during emergencies is an important responsibility that is difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos whose content was supervised by healthcare personnel, and one control group received face-to-face training during paediatric practice. To evaluate learning, a simulation scenario with an anaphylaxis victim was designed, and performance was assessed with a device capturing eye movements together with expert evaluation. The subjects who underwent face-to-face paediatric training recognised the anaphylaxis better and faster. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to decide to inject epinephrine. The different video formats yielded mixed results and should therefore be tested for usability before implementation. PMID:28758128

  12. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology.

    PubMed

    Fernandez-Mendez, Felipe; Saez-Gallego, Nieves Maria; Barcala-Furelos, Roberto; Abelairas-Gomez, Cristian; Padron-Cabo, Alexis; Perez-Ferreiros, Alexandra; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency, and decision-making during emergencies is an important responsibility that is difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos whose content was supervised by healthcare personnel, and one control group received face-to-face training during paediatric practice. To evaluate learning, a simulation scenario with an anaphylaxis victim was designed, and performance was assessed with a device capturing eye movements together with expert evaluation. The subjects who underwent face-to-face paediatric training recognised the anaphylaxis better and faster. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to decide to inject epinephrine. The different video formats yielded mixed results and should therefore be tested for usability before implementation.

  13. Developmental plateau in visual object processing from adolescence to adulthood in autism

    PubMed Central

    O'Hearn, Kirsten; Tanaka, James; Lynn, Andrew; Fedor, Jennifer; Minshew, Nancy; Luna, Beatriz

    2016-01-01

    A lack of typical age-related improvement from adolescence to adulthood contributes to face recognition deficits in adults with autism on the Cambridge Face Memory Test (CFMT). The current studies examine if this atypical developmental trajectory generalizes to other tasks and objects, including parts of the face. The CFMT tests recognition of whole faces, often with a substantial delay. The current studies used the immediate memory (IM) task and the parts-whole face task from the Let's Face It! battery, which examines whole faces, face parts, and cars, without a delay between memorization and test trials. In the IM task, participants memorize a face or car. Immediately after the target disappears, participants identify the target from two similar distractors. In the part-whole task, participants memorize a whole face. Immediately after the face disappears, participants identify the target from a distractor with different eyes or mouth, either as a face part or a whole face. Results indicate that recognition deficits in autism become more robust by adulthood, consistent with previous work, and also become more general, including cars. In the IM task, deficits in autism were specific to faces in childhood, but included cars by adulthood. In the part-whole task, deficits in autism became more robust by adulthood, including both eyes and mouths as parts and in whole faces. Across tasks, the deficit in autism increased between adolescence and adulthood, reflecting a lack of typical improvement, leading to deficits with non-face stimuli and on a task without a memory delay. These results suggest that brain maturation continues to be affected into adulthood in autism, and that the transition from adolescence to adulthood is a vulnerable stage for those with autism. PMID:25019999

  14. Face features and face configurations both contribute to visual crowding.

    PubMed

    Sun, Hsin-Mei; Balas, Benjamin

    2015-02-01

    Crowding refers to the inability to recognize an object in peripheral vision when other objects are presented nearby (Whitney & Levi Trends in Cognitive Sciences, 15, 160-168, 2011). A popular explanation of crowding is that features of the target and flankers are combined inappropriately when they are located within an integration field, thus impairing target recognition (Pelli, Palomares, & Majaj Journal of Vision, 4(12), 12:1136-1169, 2004). However, it remains unclear which features of the target and flankers are combined inappropriately to cause crowding (Levi Vision Research, 48, 635-654, 2008). For example, in a complex stimulus (e.g., a face), to what extent does crowding result from the integration of features at a part-based level or at the level of global processing of the configural appearance? In this study, we used a face categorization task and different types of flankers to examine how much the magnitude of visual crowding depends on the similarity of face parts or of global configurations. We created flankers with face-like features (e.g., the eyes, nose, and mouth) in typical and scrambled configurations to examine the impacts of part appearance and global configuration on the visual crowding of faces. Additionally, we used "electrical socket" flankers that mimicked first-order face configuration but had only schematic features, to examine the extent to which global face geometry impacted crowding. Our results indicated that both face parts and configurations contribute to visual crowding, suggesting that face similarity as realized under crowded conditions includes both aspects of facial appearance.

  15. Age, cognitive style, and traffic signs.

    PubMed

    Lambert, L D; Fleury, M

    1994-04-01

    This study assessed the efficiency with which young and older adults of varying field dependence extract information from traffic signs. It also identified some visual attributes of signs which affect recognition time. Two experiments were conducted. In Exp. 1, digitized signs, embedded in rural and urban backgrounds, were presented on a computer monitor. Subjects indicated on which side a target sign had appeared. Analysis showed that recognition times were dependent on age and field-dependence scores. Also, visual backgrounds and spatial frequency of pictographs affected RTs. In Exp. 2, recognition RT to 2 signs with redesigned pictographs was measured as well as time taken to detect signs. The signs showing reduced spatial frequency were the fastest to recognize, although no effect was noticed during detection. The subjects who showed the worst performance when facing the original signs benefitted the most from the modifications.

  16. Tuning of temporo-occipital activity by frontal oscillations during virtual mirror exposure causes erroneous self-recognition.

    PubMed

    Serino, Andrea; Sforza, Anna Laura; Kanayama, Noriaki; van Elk, Michiel; Kaliuzhna, Mariia; Herbelin, Bruno; Blanke, Olaf

    2015-10-01

    Self-face recognition, a hallmark of self-awareness, depends on 'off-line' stored information about one's face and 'on-line' multisensory-motor face-related cues. The brain mechanisms of how on-line sensory-motor processes affect off-line neural self-face representations are unknown. This study used 3D virtual reality to create a 'virtual mirror' in which participants saw an avatar's face moving synchronously with their own face movements. Electroencephalographic (EEG) analysis during virtual mirror exposure revealed mu oscillations in sensory-motor cortex signalling on-line congruency between the avatar's and participants' movements. After such exposure and compatible with a change in their off-line self-face representation, participants were more prone to recognize the avatar's face as their own, and this was also reflected in the activation of face-specific regions in the inferotemporal cortex. Further EEG analysis showed that the on-line sensory-motor effects during virtual mirror exposure caused these off-line visual effects, revealing the brain mechanisms that maintain a coherent self-representation, despite our continuously changing appearance. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Gender in facial representations: a contrast-based study of adaptation within and between the sexes.

    PubMed

    Oruç, Ipek; Guo, Xiaoyue M; Barton, Jason J S

    2011-01-18

    Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were equally dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different-gender than those of the same-gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space.

  18. Sex differences in functional activation patterns revealed by increased emotion processing demands.

    PubMed

    Hall, Geoffrey B C; Witelson, Sandra F; Szechtman, Henry; Nahmias, Claude

    2004-02-09

    Two [15O] PET studies assessed sex differences in regional brain activation during the recognition of emotional stimuli. Study I revealed that recognition of emotion in visual faces resulted in bilateral frontal activation in women and unilateral right-sided activation in men. In Study II, the complexity of the emotional face task was increased through the addition of associated auditory emotional stimuli. Men again showed unilateral frontal activation, in this case to the left, whereas women showed greater limbic activity rather than bilateral frontal activation. These results suggest that when processing broader cross-modal emotional stimuli, men engage more in associative cognitive strategies while women draw more on primary emotional references.

  19. Agnosic vision is like peripheral vision, which is limited by crowding.

    PubMed

    Strappini, Francesca; Pelli, Denis G; Di Pace, Enrico; Martelli, Marialuisa

    2017-04-01

    Visual agnosia is a neuropsychological impairment of visual object recognition despite near-normal acuity and visual fields. A century of research has provided only a rudimentary account of the functional damage underlying this deficit. We find that the object-recognition ability of agnosic patients viewing an object directly is like that of normally-sighted observers viewing it indirectly, with peripheral vision. Thus, agnosic vision is like peripheral vision. We obtained 14 visual-object-recognition tests that are commonly used for diagnosis of visual agnosia. Our "standard" normal observer took these tests at various eccentricities in his periphery. Analyzing the published data of 32 apperceptive agnosia patients and a group of 14 posterior cortical atrophy (PCA) patients on these tests, we find that each patient's pattern of object recognition deficits is well characterized by one number, the equivalent eccentricity at which our standard observer's peripheral vision is like the central vision of the agnosic patient. In other words, each agnosic patient's equivalent eccentricity is conserved across tests. Across patients, equivalent eccentricity ranges from 4 to 40 deg, which rates severity of the visual deficit. In normal peripheral vision, the required size to perceive a simple image (e.g., an isolated letter) is limited by acuity, and that for a complex image (e.g., a face or a word) is limited by crowding. In crowding, adjacent simple objects appear unrecognizably jumbled unless their spacing exceeds the crowding distance, which grows linearly with eccentricity. Besides conservation of equivalent eccentricity across object-recognition tests, we also find conservation, from eccentricity to agnosia, of the relative susceptibility of recognition of ten visual tests. These findings show that agnosic vision is like eccentric vision. Whence crowding? 
Peripheral vision, strabismic amblyopia, and possibly apperceptive agnosia are all limited by crowding, making it urgent to know what drives crowding. Acuity does not (Song et al., 2014), but neural density might: neurons per deg² in the crowding-relevant cortical area. Copyright © 2017 Elsevier Ltd. All rights reserved.
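The linear growth of crowding distance with eccentricity described in this record is commonly approximated by Bouma's rule, under which critical spacing is roughly half the eccentricity. The sketch below illustrates that relationship only; the 0.5 factor and function names are illustrative assumptions, not the authors' model:

```python
# Illustrative sketch of Bouma's rule for visual crowding (constants are assumptions).
# Critical spacing: the minimum centre-to-centre distance (in deg) at which a
# flanked object escapes crowding; it grows linearly with eccentricity.

BOUMA_FACTOR = 0.5  # commonly cited proportionality constant

def critical_spacing(eccentricity_deg, b=BOUMA_FACTOR):
    """Approximate crowding distance (deg) at a given eccentricity."""
    return b * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg, b=BOUMA_FACTOR):
    """True if flankers at this spacing would crowd a target at this eccentricity."""
    return spacing_deg < critical_spacing(eccentricity_deg, b)

# A patient with an equivalent eccentricity of 10 deg viewing letters spaced
# 4 deg apart: 4 < 0.5 * 10, so crowding is expected.
crowded = is_crowded(4.0, 10.0)
```

On this reading, a patient's "equivalent eccentricity" fixes a single crowding distance that then predicts performance across recognition tests.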

  20. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
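The spatiotemporal CNNs discussed in this record extend 2D convolution with a temporal dimension, so a unit can pool evidence over space and time in a video clip. A minimal pure-Python sketch of that core operation (toy shapes and kernel values are assumptions, and as in CNN practice the "convolution" is an unflipped cross-correlation), not the authors' architecture:

```python
# Illustrative sketch: the core operation of a spatiotemporal (3D) convolution,
# the building block of the video CNNs discussed above. Pure Python on nested
# lists indexed as video[t][y][x]; shapes and kernel values are toy assumptions.

def conv3d_valid(video, kernel):
    """'Valid' 3D cross-correlation of video[t][y][x] with kernel[t][y][x]."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):
        frame = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                acc = 0.0
                for dt in range(kt):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += video[t + dt][y + dy][x + dx] * kernel[dt][dy][dx]
                row.append(acc)
            frame.append(row)
        out.append(frame)
    return out

# A 2-frame temporal-difference kernel responds to change between frames,
# i.e. to motion, which a purely spatial (2D) filter cannot detect.
motion_kernel = [[[1.0]], [[-1.0]]]
clip = [[[0.0, 1.0]], [[1.0, 0.0]]]  # a bright pixel moving right to left
response = conv3d_valid(clip, motion_kernel)
```

Stacking many such filters, with learned weights, is what lets these networks categorize video clips into action classes.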

  1. An odd manifestation of the Capgras syndrome: loss of familiarity even with the sexual partner.

    PubMed

    Thomas Antérion, C; Convers, P; Desmales, S; Borg, C; Laurent, B

    2008-06-01

    We report the case of a patient who presented visual hallucinations and identification disorders associated with a Capgras syndrome. During the Capgras periods, there was not only a misidentification of his wife's face, but also a more global perceptive and emotional sexual identification disorder. Thus, he had sexual intercourse with his wife's "double" without the slightest feeling of familiarity towards his "wife", and he even changed his sexual habits. To the best of our knowledge, he is the only neurological patient who made his wife a mistress. Starting from this global loss of familiarity, we discuss the mechanism of the Capgras delusion with reference to the role of the implicit system of face recognition. Such a loss of familiarity, not only with the face but with all aspects of intimacy, argues for a specific disconnection between the ventral visual pathway of face identification and the limbic system involved in emotional and episodic memory contents.

  2. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many tools have been proposed for protecting visual privacy, yet those available today lack some or all of the important properties expected of them. In this paper we therefore propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on one of which the proposed method relies, were evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the public FERET dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to its strength. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
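The pipeline this record describes, a compressive point transform followed by a color look-up table, optionally interpolated back toward the original pixel, can be sketched as below. The palette, the gamma-style compression, and all function names are illustrative assumptions, not the paper's implementation or its DEF/RBS scales:

```python
# Illustrative sketch of false-color privacy protection (not the authors' code).
# A grayscale intensity is passed through a compressive point transform, then
# mapped through a color look-up table (LUT); the result can optionally be
# interpolated with the original pixel to adjust protection strength.

def compress(intensity, gamma=0.5):
    """Compressive point transform on a [0, 1] intensity (assumed gamma curve)."""
    return intensity ** gamma

def false_color(intensity, lut, gamma=0.5):
    """Map a [0, 1] intensity to an RGB triple via compression and a LUT."""
    index = int(compress(intensity, gamma) * (len(lut) - 1))
    return lut[index]

def blend(protected, original, degree):
    """Interpolate the false-color pixel with the original (degree=1 -> no protection)."""
    return tuple(round((1 - degree) * p + degree * o)
                 for p, o in zip(protected, original))

# A toy 4-entry palette standing in for scales such as DEF or RBS.
PALETTE = [(0, 0, 128), (0, 200, 200), (255, 165, 0), (255, 255, 255)]

pixel = false_color(0.25, PALETTE)           # protected pixel
softened = blend(pixel, (120, 90, 60), 0.5)  # halfway back to the original
```

Because the whole transform is a per-pixel table look-up, encrypting or inverting the LUT gives the security and reversibility properties the abstract claims.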

  3. Reduced adaptability, but no fundamental disruption, of norm-based face-coding mechanisms in cognitively able children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby

    2014-09-01

    Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. The man who mistook his neuropsychologist for a popstar: when configural processing fails in acquired prosopagnosia

    PubMed Central

    Jansari, Ashok; Miller, Scott; Pearce, Laura; Cobb, Stephanie; Sagiv, Noam; Williams, Adrian L.; Tree, Jeremy J.; Hanley, J. Richard

    2015-01-01

    We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to reflect the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills together with apparently intact visual imagery for faces allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed. PMID:26236212

  5. Inverting faces elicits sensitivity to race on the N170 component: a cross-cultural study.

    PubMed

    Vizioli, Luca; Foreman, Kay; Rousselet, Guillaume A; Caldara, Roberto

    2010-01-29

    Human beings are natural experts at processing faces, with some notable exceptions. Same-race faces are better recognized than other-race faces: the so-called other-race effect (ORE). Inverting faces impairs recognition more than for any other inverted visual object: the so-called face inversion effect (FIE). Interestingly, the FIE is stronger for same- compared to other-race faces. At the electrophysiological level, inverted faces elicit consistently delayed and often larger N170 responses compared to upright faces. However, whether the N170 component is sensitive to race is still a matter of ongoing debate. Here we investigated the N170 sensitivity to race in the framework of the FIE. We recorded EEG from Western Caucasian and East Asian observers while they were presented with Western Caucasian, East Asian and African American faces in upright and inverted orientations. To control for potential confounds in the EEG signal that might be evoked by the intrinsic and salient differences in the low-level properties of faces from different races, we normalized their amplitude spectra, luminance and contrast. No differences in the N170 were observed for upright faces. Critically, inverted same-race faces led to greater recognition impairment and elicited larger N170 amplitudes than inverted other-race faces. Our results indicate a finer-grained neural tuning for same-race faces at early stages of processing in both groups of observers.

  6. Training-induced recovery of low-level vision followed by mid-level perceptual improvements in developmental object and face agnosia

    PubMed Central

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5–6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. PMID:24698161

  8. Interference with facial emotion recognition by verbal but not visual loads.

    PubMed

    Reed, Phil; Steed, Ian

    2015-12-01

    The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically developing individuals would produce a similar inability to recognize emotions from the elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  10. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  11. Modulation of fusiform cortex activity by cholinesterase inhibition predicts effects on subsequent memory.

    PubMed

    Bentley, P; Driver, J; Dolan, R J

    2009-09-01

    Cholinergic influences on memory are likely to be expressed at several processing stages, including via well-recognized effects of acetylcholine on stimulus processing during encoding. Since previous studies have shown that cholinesterase inhibition enhances visual extrastriate cortex activity during stimulus encoding, especially under attention-demanding tasks, we tested whether this effect correlates with improved subsequent memory. In a within-subject physostigmine versus placebo design, we measured brain activity with functional magnetic resonance imaging while healthy and mild Alzheimer's disease subjects performed superficial and deep encoding tasks on face (and building) stimuli. We explored regions in which physostigmine modulation of face-selective neural responses correlated with physostigmine effects on subsequent recognition performance. In healthy subjects physostigmine led to enhanced later recognition for deep- versus superficially-encoded faces, which correlated across subjects with a physostigmine-induced enhancement of face-selective responses in right fusiform cortex during deep- versus superficial-encoding tasks. In contrast, the Alzheimer's disease group showed neither a depth of processing effect nor restoration of this with physostigmine. Instead, patients showed a task-independent improvement in confident memory with physostigmine, an effect that correlated with enhancements in face-selective (but task-independent) responses in bilateral fusiform cortices. Our results indicate that one mechanism by which cholinesterase inhibitors can improve memory is by enhancing extrastriate cortex stimulus selectivity at encoding, in a manner that depends on depth of processing in healthy people but not in Alzheimer's disease.

  12. Emotion Separation Is Completed Early and It Depends on Visual Field Presentation

    PubMed Central

    Liu, Lichan; Ioannides, Andreas A.

    2010-01-01

    It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly. PMID:20339549

  13. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15, respectively) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms regardless of the valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process and associated diminished positive memory may relate to pathological mechanisms for negative symptoms.

  14. Individual differences in false memory from misinformation: cognitive factors.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Lin, Chongde; He, Qinghua; Chen, Chunhui; Li, He; Xue, Gui; Lu, Zhonglin; Dong, Qi

    2010-07-01

    This research investigated the cognitive correlates of false memories that are induced by the misinformation paradigm. A large sample of Chinese college students (N=436) participated in a misinformation procedure and also took a battery of cognitive tests. Results revealed sizable and systematic individual differences in false memory arising from exposure to misinformation. False memories were significantly and negatively correlated with measures of intelligence (measured with Raven's Advanced Progressive Matrices and Wechsler Adult Intelligence Scale), perception (Motor-Free Visual Perception Test, Change Blindness, and Tone Discrimination), memory (Wechsler Memory Scales and 2-back Working Memory tasks), and face judgement (Face Recognition and Facial Expression Recognition). These findings suggest that people with relatively low intelligence and poor perceptual abilities might be more susceptible to the misinformation effect.

  15. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of the core face system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  16. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and with a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios where communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
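    The core of such a wavelet-based verification scheme can be sketched in numpy: decompose the face image with a wavelet transform, keep a coarse subband as a compact template, and accept a probe whose features correlate strongly enough with the enrolled template. This is a hand-rolled illustration using a two-level Haar decomposition and a correlation threshold; the paper's actual wavelet family, decomposition depth and matching rule are not specified here, so treat every parameter as an assumption.

    ```python
    import numpy as np

    def haar2d(img):
        """One level of a 2-D Haar wavelet decomposition (illustrative only)."""
        a = img[0::2, :] + img[1::2, :]   # row-pair sums (low-pass in y)
        d = img[0::2, :] - img[1::2, :]   # row-pair differences (high-pass in y)
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low approximation subband
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    def wavelet_feature(img, levels=2):
        """Use the coarsest LL subband, contrast-normalised, as the template."""
        img = img.astype(float)
        for _ in range(levels):
            img, *_ = haar2d(img)
        v = img.ravel()
        return (v - v.mean()) / (v.std() + 1e-8)

    def verify(template, probe, threshold=0.8):
        """Accept if the probe's feature correlates strongly with the template."""
        score = float(np.dot(template, probe)) / len(template)
        return score >= threshold, score
    ```

    Because the features are zero-mean and unit-variance, the score is effectively a Pearson correlation, which keeps the decision threshold independent of illumination scale, one reason coarse wavelet subbands are attractive on low-power devices.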

  17. Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes

    PubMed Central

    Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.

    2011-01-01

    Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414

  18. Familiarity facilitates feature-based face processing.

    PubMed

    Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida

    2017-01-01

    Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.

  19. Labelling Facial Affect in Context in Adults with and without TBI

    PubMed Central

    Turkstra, Lyn S.; Kraning, Sarah G.; Riedeman, Sarah K.; Mutlu, Bilge; Duff, Melissa; VanDenHeuvel, Sara

    2017-01-01

    Recognition of facial affect has been studied extensively in adults with and without traumatic brain injury (TBI), mostly by asking examinees to match basic emotion words to isolated faces. This method may not capture affect labelling in everyday life when faces are in context and choices are open-ended. To examine effects of context and response format, we asked 148 undergraduate students to label emotions shown on faces either in isolation or in natural visual scenes. Responses were categorised as representing basic emotions, social emotions, cognitive state terms, or appraisals. We used students’ responses to create a scoring system that was applied prospectively to five men with TBI. In both groups, over 50% of responses were neither basic emotion words nor synonyms, and there was no significant difference in response types between faces alone vs. in scenes. Adults with TBI used labels not seen in students’ responses, talked more overall, and often gave multiple labels for one photo. Results suggest benefits of moving beyond forced-choice tests of faces in isolation to fully characterise affect recognition in adults with and without TBI. PMID:29093643

  20. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  1. Faces do not capture special attention in children with autism spectrum disorder: a change blindness study.

    PubMed

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 children with ASD and 22 typically developing children), which did not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.

  2. Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task.

    PubMed

    Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A

    2015-02-01

    In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
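    The band-pass manipulation of face stimuli can be sketched as a hard annular filter in the Fourier domain, with frequencies expressed in cycles per face, taken here as cycles per image on the assumption that the face fills the image. This is a generic illustration; the study's actual filter shapes and roll-offs are not reproduced.

    ```python
    import numpy as np

    def bandpass_face(img, low_cpf, high_cpf):
        """Keep only spatial frequencies between low_cpf and high_cpf
        cycles per face (= cycles per image here); illustrative sketch."""
        h, w = img.shape
        # fftfreq gives cycles per sample; multiply by size for cycles per image.
        fy = np.fft.fftfreq(h) * h
        fx = np.fft.fftfreq(w) * w
        radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
        mask = (radius >= low_cpf) & (radius <= high_cpf)
        # Zero out all frequency components outside the annulus.
        return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
    ```

    For example, filtering with `bandpass_face(img, 5, 20)` retains roughly the middle band described above (≈5-20 cycles/face) while discarding the low and high bands.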

  3. The uncrowded window of object recognition

    PubMed Central

    Pelli, Denis G; Tillman, Katharine A

    2009-01-01

    It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191

  4. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    PubMed

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Do I Have My Attention? Speed of Processing Advantages for the Self-Face Are Not Driven by Automatic Attention Capture

    PubMed Central

    Keyes, Helen; Dlugokencka, Aleksandra

    2014-01-01

    We respond more quickly to our own face than to other faces, but there is debate over whether this is connected to attention-grabbing properties of the self-face. In two experiments, we investigate whether the self-face selectively captures attention, and the attentional conditions under which this might occur. In both experiments, we examined whether different types of face (self, friend, stranger) provide differential levels of distraction when processing self, friend and stranger names. In Experiment 1, an image of a distractor face appeared centrally – inside the focus of attention – behind a target name, with the faces either upright or inverted. In Experiment 2, distractor faces appeared peripherally – outside the focus of attention – in the left or right visual field, or bilaterally. In both experiments, self-name recognition was faster than other name recognition, suggesting a self-referential processing advantage. The presence of the self-face did not cause more distraction in the naming task compared to other types of face, either when presented inside (Experiment 1) or outside (Experiment 2) the focus of attention. Distractor faces had different effects across the two experiments: when presented inside the focus of attention (Experiment 1), self and friend images facilitated self and friend naming, respectively. This was not true for stranger stimuli, suggesting that faces must be robustly represented to facilitate name recognition. When presented outside the focus of attention (Experiment 2), no facilitation occurred. Instead, we report an interesting distraction effect caused by friend faces when processing strangers’ names. We interpret this as a “social importance” effect, whereby we may be tuned to pick out and pay attention to familiar friend faces in a crowd. We conclude that any speed of processing advantages observed in the self-face processing literature are not driven by automatic attention capture. PMID:25338170

  8. Face gender categorization and hemispheric asymmetries: Contrasting evidence from connected and disconnected brains.

    PubMed

    Prete, Giulia; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca

    2016-12-17

    We investigated hemispheric asymmetries in categorization of face gender by means of a divided visual field paradigm, in which female and male faces were presented unilaterally for 150 ms each. A group of 60 healthy participants (30 males) and a male split-brain patient (D.D.C.) were asked to categorize the gender of the stimuli. Healthy participants categorized male faces presented in the right visual field (RVF) better and faster than when presented in the left visual field (LVF), and female faces presented in the LVF than in the RVF, independently of the participants' sex. Surprisingly, the recognition rates of D.D.C. were at chance levels - and significantly lower than those of the healthy participants - for both female and male faces presented in the RVF, as well as for female faces presented in the LVF. His performance was higher than expected by chance - and did not differ from controls - only for male faces presented in the LVF. The residual right-hemispheric ability of the split-brain patient in categorizing male faces reveals an own-gender bias lateralized in the right hemisphere, in line with the rightward own-identity and own-age bias previously shown in split-brain patients. The gender-contingent hemispheric dominance found in healthy participants confirms the previously shown right-hemispheric superiority in recognizing female faces, and also reveals a left-hemispheric superiority in recognizing male faces, adding important evidence of hemispheric imbalance in the field of face and gender perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Age Differences in the Differentiation of Trait Impressions From Faces

    PubMed Central

    Ng, Stacey Y.; Zebrowitz, Leslie A.; Franklin, Robert G.

    2016-01-01

    Objectives. We investigated whether evidence that older adults (OA) show less differentiation of visual stimuli than younger adults (YA) extends to trait impressions from faces and effects of face age. We also examined whether age differences in mood, vision, or cognition mediated these differentiation differences. Finally, we investigated whether age differences in trait differentiation mediated differences in impression positivity. Method. We used a differentiation index adapted from previous work on stereotyping to assess OA and YA likelihood of assigning different faces to different levels on trait scales. We computed scores for ratings of older and younger faces’ competence, health, hostility, and untrustworthiness. Results. OA showed less differentiated trait ratings than YA. Measures of mood, vision, and cognition did not mediate these rater age differences. Hostility was differentiated more for younger than older faces, while health was differentiated more for older faces, but only by OA. Age differences in differentiation mediated age differences in impression positivity. Discussion. Less differentiation of trait impressions from faces in OA is consistent with previous evidence for less differentiation in face and emotion recognition. Results indicated that age-related dedifferentiation does not reflect narrow changes in visual function. They also provide a novel explanation for OA positivity effects. PMID:25194140
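    The abstract does not spell out the differentiation index, so the sketch below uses one plausible operationalization (an assumption, not the authors' exact formula): the spread of a single rater's trait ratings across faces, which is zero when every face is assigned the same scale level.

```python
import numpy as np

def differentiation_index(ratings):
    """Hypothetical differentiation index: standard deviation of one
    rater's trait ratings across faces. A rater who places every face
    at the same scale level scores 0; spreading faces across levels
    raises the score. This is an illustrative stand-in, not the
    paper's published formula."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.std(ddof=0)

# A rater who differentiates faces scores higher than one who does not.
assert differentiation_index([1, 7, 3, 5]) > differentiation_index([4, 4, 4, 4])
```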

  10. [Developmental change in facial recognition by premature infants during infancy].

    PubMed

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

    Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be impaired commonly in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were determined and analyzed using an eye-tracking system (Tobii T60, Tobii Technology, Sweden). Both premature and full-term infants showed a preference for normal facial expressions; however, no preference for the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  11. View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso

    2017-01-01

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
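    The core group-theoretic point can be demonstrated in a toy setting (illustrative only, not the paper's model): if a stimulus set is closed under left-right reflection, the covariance matrix commutes with the reflection, so PCA-like (Hebbian) learning yields units that are symmetric or antisymmetric under the flip, and hence have mirror-symmetric response magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stimuli": random patterns plus their mirror images, so the set
# is closed under left-right reflection (as faces are, approximately).
d, n = 6, 500
x = rng.standard_normal((n, d))
stimuli = np.vstack([x, x[:, ::-1]])          # append mirrored copies

# Hebbian/PCA-style learning extracts eigenvectors of the covariance.
cov = np.cov(stimuli, rowvar=False)
cov = (cov + cov[::-1, ::-1]) / 2             # enforce exact flip symmetry
_, vecs = np.linalg.eigh(cov)

# Because the covariance commutes with the reflection, each learned
# "cell" (eigenvector w) satisfies flip(w) = ±w, so its response
# magnitude |w·s| is identical for a stimulus and its mirror image.
for w in vecs.T:
    assert min(np.linalg.norm(w - w[::-1]),
               np.linalg.norm(w + w[::-1])) < 1e-6
```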

  12. Are all types of expertise created equal? Car experts use different spatial frequency scales for subordinate categorization of cars and faces.

    PubMed

    Harel, Assaf; Bentin, Shlomo

    2013-01-01

    A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level rely on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g., "car") and subordinate level (e.g., "Japanese car") identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces does not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information.
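    Spatial-frequency filtering of the kind used to create such stimuli can be sketched with a Fourier-domain Gaussian filter. The cutoff value and function names below are illustrative assumptions, not the study's exact parameters; note that the complementary low- and high-pass outputs sum back to the original image.

```python
import numpy as np

def sf_filter(image, cutoff, low_pass=True):
    """Gaussian low-/high-pass spatial-frequency filter applied in the
    Fourier domain. `cutoff` is expressed in cycles per image (the
    units of np.fft.fftfreq scaled by image size)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h   # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w   # horizontal frequency, cycles/image
    radius = np.hypot(fy, fx)             # radial spatial frequency
    gauss = np.exp(-(radius / cutoff) ** 2 / 2)
    mask = gauss if low_pass else 1.0 - gauss
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

    Because the two masks are complements, `sf_filter(img, c, True) + sf_filter(img, c, False)` reconstructs `img`, which makes the decomposition easy to sanity-check.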

  14. Famous face recognition, face matching, and extraversion.

    PubMed

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  15. Biases in facial and vocal emotion recognition in chronic schizophrenia

    PubMed Central

    Dondaine, Thibaut; Robert, Gabriel; Péron, Julie; Grandjean, Didier; Vérin, Marc; Drapier, Dominique; Millet, Bruno

    2014-01-01

    There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces but few studies have examined biases in emotional recognition across different modalities (facial and vocal). In order to test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., surprise scale for fear stimuli) than HCs for the both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia have emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed. PMID:25202287

  16. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks.

    PubMed

    Meaux, Emilie; Vuilleumier, Patrik

    2016-11-01

    The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform, inferior occipital areas and amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by reciprocal interactions and modulated by task demands. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. The neural organization of perception in chess experts.

    PubMed

    Krawczyk, Daniel C; Boggan, Amy L; McClelland, M Michelle; Bartlett, James C

    2011-07-20

    The human visual system responds to expertise, and it has been suggested that regions that process faces also process other objects of expertise, including chess boards by experts. We tested whether chess and face processing overlap in brain activity using fMRI. Chess experts and novices exhibited face selective areas, but these regions showed no selectivity to chess configurations relative to other stimuli. We next compared neural responses to chess and to scrambled chess displays to isolate areas relevant to expertise. Areas within the posterior cingulate, orbitofrontal cortex, and right temporal cortex were active in this comparison in experts over novices. We also compared chess and face responses within the posterior cingulate and found this area responsive to chess only in experts. These findings indicate that chess configurations are not strongly processed by face-selective regions, even in individuals who have expertise in both domains. Further, the area most consistently involved in chess did not show overlap with faces. Overall, these results suggest that expert visual processing may be similar at the level of recognition, but need not show the same neural correlates. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. Many faces of expertise: fusiform face area in chess experts and novices.

    PubMed

    Bilalić, Merim; Langner, Robert; Ulrich, Rolf; Grodd, Wolfgang

    2011-07-13

    The fusiform face area (FFA) is involved in face perception to such an extent that some claim it is a brain module for faces exclusively. The other possibility is that FFA is modulated by experience in individuation in any visual domain, not only faces. Here we test this latter FFA expertise hypothesis using the game of chess as a domain of investigation. We exploited the characteristic of chess, which features multiple objects forming meaningful spatial relations. In three experiments, we show that FFA activity is related to stimulus properties and not to chess skill directly. In all chess and non-chess tasks, experts' FFA was more activated than that of novices' only when they dealt with naturalistic full-board chess positions. When common spatial relationships formed by chess objects in chess positions were randomly disturbed, FFA was again differentially active only in experts, regardless of the actual task. Our experiments show that FFA contributes to the holistic processing of domain-specific multipart stimuli in chess experts. This suggests that FFA may not only mediate human expertise in face recognition but, supporting the expertise hypothesis, may mediate the automatic holistic processing of any highly familiar multipart visual input.

  19. Affective matching of odors and facial expressions in infants: shifting patterns between 3 and 7 months.

    PubMed

    Godard, Ornella; Baudouin, Jean-Yves; Schaal, Benoist; Durand, Karine

    2016-01-01

    Recognition of emotional facial expressions is a crucial skill for adaptive behavior. Past research suggests that at 5 to 7 months of age, infants look longer at an unfamiliar dynamic angry/happy face which emotionally matches a vocal expression. This suggests that they can match stimulations of distinct modalities on their emotional content. In the present study, olfaction-vision matching abilities were assessed across different age groups (3, 5 and 7 months) using dynamic expressive faces (happy vs. disgusted) and distinct hedonic odor contexts (pleasant, unpleasant and control) in a visual-preference paradigm. At all ages the infants were biased toward the disgust faces. This visual bias reversed into a bias for smiling faces in the pleasant odor context in the 3-month-old infants. In infants aged 5 and 7 months, no effect of the odor context appeared in the present conditions. This study highlights the role of the olfactory context in the modulation of visual behavior toward expressive faces in infants. The influence of olfaction took the form of a contingency effect in 3-month-old infants, but later vanished or took another form that could not be evidenced in the present study. © 2015 John Wiley & Sons Ltd.

  20. MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus.

    PubMed

    Hagan, Cindy C; Woods, Will; Johnson, Sam; Calder, Andrew J; Green, Gary G R; Young, Andrew W

    2009-11-24

    An influential neural model of face perception suggests that the posterior superior temporal sulcus (STS) is sensitive to those aspects of faces that produce transient visual changes, including facial expression. Other researchers note that recognition of expression involves multiple sensory modalities and suggest that the STS also may respond to crossmodal facial signals that change transiently. Indeed, many studies of audiovisual (AV) speech perception show STS involvement in AV speech integration. Here we examine whether these findings extend to AV emotion. We used magnetoencephalography to measure the neural responses of participants as they viewed and heard emotionally congruent fear and minimally congruent neutral face and voice stimuli. We demonstrate significant supra-additive responses (i.e., where AV > [unimodal auditory + unimodal visual]) in the posterior STS within the first 250 ms for emotionally congruent AV stimuli. These findings show a role for the STS in processing crossmodal emotive signals.
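    The supra-additivity criterion (AV > A + V) amounts to simple arithmetic on response amplitudes; a minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

def supra_additive(av, a, v):
    """Supra-additivity index: audiovisual (AV) response minus the sum
    of the unimodal auditory (A) and visual (V) responses. Positive
    values indicate a supra-additive multisensory response."""
    return np.asarray(av, dtype=float) - (np.asarray(a, dtype=float)
                                          + np.asarray(v, dtype=float))

# Illustrative (made-up) response amplitudes for three participants.
idx = supra_additive(av=[5.1, 4.8, 6.0], a=[2.0, 1.9, 2.5], v=[2.4, 2.2, 2.8])
assert np.all(idx > 0)   # supra-additive in this toy example
```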

  1. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins

    PubMed Central

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One’s own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin’s face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  3. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, and to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
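    The grid manipulation described above (block averaging, gray-level quantization, random dot dropout) can be sketched as follows. Function and parameter names are illustrative assumptions, not the study's implementation, which additionally rendered dots as noncontiguous circular phosphenes with gaps.

```python
import numpy as np

def pixelize(image, grid=16, gray_levels=8, dropout=0.0, rng=None):
    """Simulate phosphene-style pixelized vision: block-average the
    image into a grid x grid array of "dots", quantize dot luminance to
    a small number of gray levels, and randomly drop a fraction of dots
    (set to background 0). Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    # Crop so the image divides evenly into grid x grid blocks.
    bh, bw = h // grid, w // grid
    img = image[:bh * grid, :bw * grid].astype(float)
    # Block averaging ("blocking"): mean luminance per dot.
    dots = img.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # Quantize each dot to the nearest of `gray_levels` levels.
    levels = np.linspace(img.min(), img.max(), gray_levels)
    dots = levels[np.abs(dots[..., None] - levels).argmin(axis=-1)]
    # Random dot dropout.
    dots[rng.random((grid, grid)) < dropout] = 0.0
    return dots
```

    Varying `grid`, `gray_levels`, and `dropout` reproduces, in spirit, the parameter sweep the abstract describes (grid resolution, gray-scale resolution, and dot dropout rate).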

  4. Early top-down control of visual processing predicts working memory performance

    PubMed Central

    Rutman, Aaron M.; Clapp, Wesley C.; Chadick, James Z.; Gazzaley, Adam

    2009-01-01

    Selective attention confers a behavioral benefit for both perceptual and working memory (WM) performance, often attributed to top-down modulation of sensory neural processing. However, the direct relationship between early activity modulation in sensory cortices during selective encoding and subsequent WM performance has not been established. To explore the influence of selective attention on WM recognition, we used electroencephalography (EEG) to study the temporal dynamics of top-down modulation in a selective, delayed-recognition paradigm. Participants were presented with overlapped, “double-exposed” images of faces and natural scenes, and were instructed to either remember the face or the scene while simultaneously ignoring the other stimulus. Here, we present evidence that the degree to which participants modulate the early P100 (97–129 ms) event-related potential (ERP) during selective stimulus encoding significantly correlates with their subsequent WM recognition. These results contribute to our evolving understanding of the mechanistic overlap between attention and memory. PMID:19413473

  5. Perceptual expertise: can sensorimotor experience change holistic processing and left-side bias?

    PubMed

    Tso, Ricky Van-yip; Au, Terry Kit-fong; Hsiao, Janet Hui-wen

    2014-09-01

    Holistic processing and left-side bias are both behavioral markers of expert face recognition. By contrast, expert recognition of characters in Chinese orthography involves left-side bias but reduced holistic processing, although faces and Chinese characters share many visual properties. Here, we examined whether this reduction in holistic processing of Chinese characters can be better explained by writing experience than by reading experience. Compared with Chinese nonreaders, Chinese readers who had limited writing experience showed increased holistic processing, whereas Chinese readers who could write characters fluently showed reduced holistic processing. This result suggests that writing and sensorimotor experience can modulate holistic-processing effects and that the reduced holistic processing observed in expert Chinese readers may depend mostly on writing experience. However, both expert writers and writers with limited experience showed similarly stronger left-side bias than novices did in processing mirror-symmetric Chinese characters; left-side bias may therefore be a robust expertise marker for object recognition that is uninfluenced by sensorimotor experience. © The Author(s) 2014.

  6. A parametric study of face recognition when image degradations are combined.

    PubMed

    Uttal, W R; Baruch, T; Allen, L

    1997-01-01

    This article expands and quantifies one of the classic reports of modern visual perception research--Harmon and Julesz' (1973) demonstration of an enhancement in recognition performance when area averaging (blocking) and spatial frequency filtering are sequentially applied. Our goals were twofold: first, to determine if the existence of the phenomena could be confirmed and replicated in a parametric study; second, to determine if the new results supported the critical band masking theory originally proposed by Harmon and Julesz. We confirmed the presence of the phenomenon for stimuli subtending approximately six deg of visual angle vertically, but observed a surprisingly different pattern of results for smaller stimuli subtending approximately one deg. These and other recent findings from other laboratories raise questions about their masking theory as a complete explanation of the phenomena.

  7. Advanced Parkinson disease patients have impairment in prosody processing.

    PubMed

    Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão

    2016-01-01

    The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.

  8. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular `surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface-type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface-type regions can be produced, and the variation of this description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and an average female face. Thus an assessment can be made of how individuals vary from the average, as well as a general statement about the differences between male and female faces. This method will enable an investigation of how reliably faces can be individuated by their surface shape, which, if feasible, may form the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
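
    Segmenting a surface into eight perceptually meaningful types is standard HK-sign classification from differential geometry. A minimal sketch, assuming the mean curvature H and Gaussian curvature K have already been estimated from the scan, and using the common Besl-Jain labels (which may differ from the labels used in this record):

```python
# Eight HK surface types, indexed by the signs of mean curvature H and
# Gaussian curvature K. A small epsilon treats near-zero curvature as zero.
TYPES = {
    (-1, 1): "peak",   (-1, 0): "ridge",  (-1, -1): "saddle ridge",
    (0, 0):  "flat",   (0, -1): "minimal surface",
    (1, 1):  "pit",    (1, 0):  "valley", (1, -1): "saddle valley",
}

def sign(x, eps=1e-3):
    """Three-valued sign with a dead zone around zero."""
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

def surface_type(H, K):
    """Classify one surface point by its curvature signs."""
    # (0, +1) is geometrically impossible because K <= H**2.
    return TYPES.get((sign(H), sign(K)), "undefined")
```

    Applying `surface_type` per vertex of the scanned mesh yields the regions described above, e.g. the nose tip classifies as a peak and the nasal bridge as a ridge.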

  9. Alterations in visual cortical activation and connectivity with prefrontal cortex during working memory updating in major depressive disorder.

    PubMed

    Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung

    2017-01-01

    The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. 
In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.

  10. Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.

    PubMed

    Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley

    2011-07-01

    Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.

  11. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    PubMed

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or on the mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  12. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    PubMed Central

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or on the mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921

  13. Mapping face categorization in the human ventral occipitotemporal cortex with direct neural intracranial recordings.

    PubMed

    Rossion, Bruno; Jacques, Corentin; Jonas, Jacques

    2018-02-26

    The neural basis of face categorization has been widely investigated with functional magnetic resonance imaging (fMRI), identifying a set of face-selective local regions in the ventral occipitotemporal cortex (VOTC). However, indirect recording of neural activity with fMRI is associated with large fluctuations of signal across regions, often underestimating face-selective responses in the anterior VOTC. While direct recording of neural activity with subdural grids of electrodes (electrocorticography, ECoG) or depth electrodes (stereotactic electroencephalography, SEEG) offers a unique opportunity to fill this gap in knowledge, such studies instead reveal widely distributed face-selective responses. Moreover, intracranial recordings are complicated by interindividual variability in neuroanatomy, ambiguity in the definition and quantification of responses of interest, as well as limited access to sulci with ECoG. Here, we propose to combine SEEG in large samples of individuals with fast periodic visual stimulation to objectively define, quantify, and characterize face categorization across the whole VOTC. This approach reconciles the wide distribution of neural face categorization responses with their (right) hemispheric and regional specialization, and reveals several face-selective regions in anterior VOTC sulci. We outline the challenges of this research program to understand the neural basis of face categorization and high-level visual recognition in general. © 2018 New York Academy of Sciences.

  14. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  15. Pathways for smiling, disgust and fear recognition in blindsight patients.

    PubMed

    Gerbella, Marzio; Caruana, Fausto; Rizzolatti, Giacomo

    2017-08-31

    The aim of the present review is to discuss the localization of the circuits that allow recognition of emotional facial expressions in blindsight patients. Because recognition of facial expressions is a function of different centers, whose localization is not always clear, we discuss here three emotional facial expressions - smiling, disgust, and fear - whose anatomical localization in the pregenual sector of the anterior cingulate cortex (pACC), anterior insula (AI), and amygdala, respectively, is well established. We then examined the possible pathways that may convey affective visual information to these centers following lesions of V1. We concluded that the pathway leading to the pACC, AI, and amygdala involves the deep layers of the superior colliculus, the medial pulvinar, and the superior temporal sulcus region. We suggest that this visual pathway provides an image of the observed affective faces which, although deteriorated, is sufficient to determine some overt behavior, but not to provide conscious experience of the presented stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Spontaneous expression of mirror self-recognition in monkeys after learning precise visual-proprioceptive association for mirror images

    PubMed Central

    Chang, Liangtang; Zhang, Shikun; Poo, Mu-ming; Gong, Neng

    2017-01-01

    Mirror self-recognition (MSR) is generally considered to be an intrinsic cognitive ability found only in humans and a few species of great apes. Rhesus monkeys do not spontaneously show MSR, but they have the ability to use a mirror as an instrument to find hidden objects. The mechanism underlying the transition from simple mirror use to MSR remains unclear. Here we show that rhesus monkeys could show MSR after learning precise visual-proprioceptive association for mirror images. We trained head-fixed monkeys on a chair in front of a mirror to touch with spatiotemporal precision a laser pointer light spot on an adjacent board that could only be seen in the mirror. After several weeks of training, when the same laser pointer light was projected to the monkey's face, a location not used in training, all three trained monkeys successfully touched the face area marked by the light spot in front of a mirror. All trained monkeys passed the standard face mark test for MSR both on the monkey chair and in their home cage. Importantly, distinct from untrained control monkeys, the trained monkeys showed typical mirror-induced self-directed behaviors in their home cage, such as using the mirror to explore normally unseen body parts. Thus, bodily self-consciousness may be a cognitive ability present in many more species than previously thought, and acquisition of precise visual-proprioceptive association for the images in the mirror is critical for revealing the MSR ability of the animal. PMID:28193875

  17. Spontaneous expression of mirror self-recognition in monkeys after learning precise visual-proprioceptive association for mirror images.

    PubMed

    Chang, Liangtang; Zhang, Shikun; Poo, Mu-Ming; Gong, Neng

    2017-03-21

    Mirror self-recognition (MSR) is generally considered to be an intrinsic cognitive ability found only in humans and a few species of great apes. Rhesus monkeys do not spontaneously show MSR, but they have the ability to use a mirror as an instrument to find hidden objects. The mechanism underlying the transition from simple mirror use to MSR remains unclear. Here we show that rhesus monkeys could show MSR after learning precise visual-proprioceptive association for mirror images. We trained head-fixed monkeys on a chair in front of a mirror to touch with spatiotemporal precision a laser pointer light spot on an adjacent board that could only be seen in the mirror. After several weeks of training, when the same laser pointer light was projected to the monkey's face, a location not used in training, all three trained monkeys successfully touched the face area marked by the light spot in front of a mirror. All trained monkeys passed the standard face mark test for MSR both on the monkey chair and in their home cage. Importantly, distinct from untrained control monkeys, the trained monkeys showed typical mirror-induced self-directed behaviors in their home cage, such as using the mirror to explore normally unseen body parts. Thus, bodily self-consciousness may be a cognitive ability present in many more species than previously thought, and acquisition of precise visual-proprioceptive association for the images in the mirror is critical for revealing the MSR ability of the animal.

  18. Altered Face Scanning and Impaired Recognition of Biological Motion in a 15-Month-Old Infant with Autism

    ERIC Educational Resources Information Center

    Klin, Ami; Jones, Warren

    2008-01-01

    Mounting clinical evidence suggests that abnormalities of social engagement in children with autism are present even during infancy. However, direct experimental documentation of these abnormalities is still limited. In this case report of a 15-month-old infant with autism, we measured visual fixation patterns to both naturalistic and ambiguous…

  19. Survey on Classifying Human Actions through Visual Sensors

    DTIC Science & Technology

    2011-04-08


  20. Learning Rotation-Invariant Local Binary Descriptor.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
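
    The learned RI-LBD itself is not reproduced here, but the classic hand-crafted rotation-invariant local binary pattern it improves upon can be sketched as a minimal illustration. The function names are hypothetical, and real implementations use vectorized or lookup-table variants rather than this brute-force loop:

```python
import numpy as np

def ri_lbp_code(patch):
    """Rotation-invariant LBP code of a 3x3 patch: threshold the 8 neighbours
    against the centre pixel, then take the minimum over all circular bit
    rotations, so every rotated version of a patch maps to the same code."""
    c = patch[1, 1]
    # neighbours listed in circular order around the centre
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in nbrs]
    codes = []
    for r in range(8):
        rot = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rot)))
    return min(codes)

def ri_lbp_histogram(img):
    """Histogram of rotation-invariant codes over all interior 3x3 patches,
    used as a global image descriptor (the final representation)."""
    hist = np.zeros(256, dtype=int)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            hist[ri_lbp_code(img[i - 1:i + 2, j - 1:j + 2])] += 1
    return hist
```

    The RI-LBD described in the record replaces the fixed thresholding with a learned projection, but shares this structure: assign each patch a rotation-normalized binary code, then pool codes into a histogram over the image.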

  1. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends heavily on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50,000 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  2. Prevalence of face recognition deficits in middle childhood.

    PubMed

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury - a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties - that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  3. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391

  4. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms.

  5. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    PubMed

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  6. Better Object Recognition and Naming Outcome With MRI-Guided Stereotactic Laser Amygdalohippocampotomy for Temporal Lobe Epilepsy

    PubMed Central

    Drane, Daniel L.; Loring, David W.; Voets, Natalie L.; Price, Michele; Ojemann, Jeffrey G.; Willie, Jon T.; Saindane, Amit M.; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L.; Miller, John W.; Meador, Kimford J.; Gross, Robert E.

    2015-01-01

    SUMMARY OBJECTIVES Temporal lobe epilepsy (TLE) patients experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, semantic memory), due to “collateral damage” to temporal regions outside the hippocampus following open surgical approaches. We predicted stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions critical for these cognitive processes. METHODS Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of nineteen patients with medically-intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant), and a comparable series of TLE patients undergoing standard surgical approaches (n=39) using a prospective, non-randomized, non-blinded, parallel group design. RESULTS Performance declines were significantly greater for the dominant TLE patients undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<.0001, η2=.57, & F=11.2, p<.001, η2=.39, respectively), and for the nondominant TLE patients undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<.02, η2=.19). When examined on an individual subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 undergoing standard surgical approaches declined on one or more measures for both object types (p<.001, Fisher’s exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (non-dominant) TLE patients declined on face recognition. 
SIGNIFICANCE Preliminary results suggest 1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and 2) the hippocampus does not appear to be an essential component of neural networks underlying name retrieval or recognition of common objects or famous faces. PMID:25489630

  7. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    PubMed

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionarily based overestimation bias towards detecting social visual cues of the defector face. © 2012 The British Psychological Society.

  8. Common polymorphism in the oxytocin receptor gene (OXTR) is associated with human social recognition skills

    PubMed Central

    Skuse, David H.; Lori, Adriana; Cubells, Joseph F.; Lee, Irene; Conneely, Karen N.; Puura, Kaija; Lehtimäki, Terho; Binder, Elisabeth B.; Young, Larry J.

    2014-01-01

    The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7–60 y) and sex. A common SNP in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range −0.6 to −1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans. PMID:24367110

  9. Common polymorphism in the oxytocin receptor gene (OXTR) is associated with human social recognition skills.

    PubMed

    Skuse, David H; Lori, Adriana; Cubells, Joseph F; Lee, Irene; Conneely, Karen N; Puura, Kaija; Lehtimäki, Terho; Binder, Elisabeth B; Young, Larry J

    2014-02-04

    The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7-60 y) and sex. A common SNP in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range -0.6 to -1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans.

  10. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    PubMed

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  11. A Reciprocal Model of Face Recognition and Autistic Traits: Evidence from an Individual Differences Perspective

    PubMed Central

    Halliday, Drew W. R.; MacDonald, Stuart W. S.; Scherf, K. Suzanne; Tanaka, James W.

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals. PMID:24853862

  12. The first does the work, but the third time's the charm: the effects of massed repetition on episodic encoding of multimodal face-name associations.

    PubMed

    Mangels, Jennifer A; Manzi, Alberto; Summerfield, Christopher

    2010-03-01

    In social interactions, it is often necessary to rapidly encode the association between visually presented faces and auditorily presented names. The present study used event-related potentials to examine the neural correlates of associative encoding for multimodal face-name pairs. We assessed study-phase processes leading to high-confidence recognition of correct pairs (and consistent rejection of recombined foils) as compared to lower-confidence recognition of correct pairs (with inconsistent rejection of recombined foils) and recognition failures (misses). Both high- and low-confidence retrieval of face-name pairs were associated with study-phase activity suggestive of item-specific processing of the face (posterior inferior temporal negativity) and name (fronto-central negativity). However, only those pairs later retrieved with high confidence recruited a sustained centro-parietal positivity that an ancillary localizer task suggested may index an association-unique process. Additionally, we examined how these processes were influenced by massed repetition, a mnemonic strategy commonly employed in everyday situations to improve face-name memory. Differences in subsequent memory effects across repetitions suggested that associative encoding was strongest at the initial presentation, and thus, that the initial presentation has the greatest impact on memory formation. Yet, exploratory analyses suggested that the third presentation may have benefited later memory by providing an opportunity for extended processing of the name. Thus, although encoding of the initial presentation was critical for establishing a strong association, processing sustained across subsequent immediate (massed) presentations may provide further encoding support, helping to differentiate face-name pairs from similar (recombined) pairs by giving additional encoding opportunities to the less dominant stimulus dimension (i.e., the name).

  13. View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation.

    PubMed

    Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso

    2017-01-09

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.
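    The core claim, that a Hebb-type rule applied to bilaterally symmetric stimulus statistics yields mirror-symmetric tuning, can be checked in a toy simulation (a sketch, not the paper's model; the averaged Hebbian update with normalization below amounts to power iteration on the input covariance):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 8, 500
    flip = lambda v: v[..., ::-1]   # left-right mirror of an input vector

    # A stimulus set closed under mirroring: every sample appears with its
    # reflection, so the input distribution is bilaterally symmetric.
    s = rng.normal(size=(n, d))
    X = np.vstack([s, flip(s)])

    # Averaged Hebbian update with weight normalization; iterated, this is
    # power iteration on the input covariance C.
    C = X.T @ X / len(X)
    w = rng.normal(size=d)
    for _ in range(1000):
        w = C @ w                   # batch Hebb step: <y * x> = C w
        w /= np.linalg.norm(w)      # Oja-style normalization

    # Because the stimulus statistics are mirror-symmetric, C commutes with
    # the flip, so w converges to a (anti)symmetric vector, and the unit's
    # response magnitude is the same for any probe and its mirror image.
    probe = rng.normal(size=d)
    r, r_mirror = abs(w @ probe), abs(w @ flip(probe))
    ```

    The mirror symmetry of the tuning here is a property of the stimulus statistics plus the learning rule, not of the probe, which is the shape of the argument the abstract makes for intermediate face-patch cells.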

  14. The many faces of research on face perception.

    PubMed

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.

  15. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance for meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation and positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.
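    The three-subsystem design described above amounts to a routing step followed by one of two matchers. A minimal sketch, with hypothetical function names rather than the paper's implementation:

    ```python
    from typing import Callable, Sequence

    def recognize_3d_face(
        scan: Sequence[float],
        classify_expression: Callable,  # subsystem 1: expression recognition
        identify_expressive: Callable,  # subsystem 2: faces with expression
        identify_neutral: Callable,     # subsystem 3: neutral face recognition
    ) -> str:
        """Route a 3D scan to the matcher suited to its expression state."""
        expression = classify_expression(scan)
        if expression == "neutral":
            return identify_neutral(scan)
        return identify_expressive(scan, expression)

    # Toy stand-ins: the first value of the "scan" flags a happy expression,
    # the second encodes the subject's identity.
    classify = lambda scan: "happiness" if scan[0] > 0 else "neutral"
    ident_expr = lambda scan, expr: f"subject-{int(scan[1])} ({expr})"
    ident_neut = lambda scan: f"subject-{int(scan[1])}"
    ```

    The design choice is that expression classification happens first, so the identity matchers can each be trained on geometrically homogeneous inputs (one expression class, or neutral faces), sidestepping expression-induced deformation at matching time.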

  16. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    PubMed

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We mainly focus on the new formation of memory and association in visual processing under different circumstances, as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of objects, since people use different features for object recognition. Moreover, to achieve fast and robust recognition in the retrieval and association process, different types of features are stored in separate clusters, and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects, since distinct neural circuits in the human brain are used for the identification of various types of objects. Furthermore, active cognition adjustment for occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate that our model performs visual recognition efficiently, with much lower memory storage requirements and better performance than traditional purely computational methods.
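    The alternating "selectivity then pooling" scheme that HMAX-style models build on can be illustrated with a 1-D toy (a sketch with illustrative names, not the authors' model): an S stage computes template matches at every position, and a C stage max-pools over positions, so the response is unchanged when the preferred pattern shifts.

    ```python
    import numpy as np

    def s_layer(signal, template):
        """Selectivity stage: template match at every position (S units)."""
        t = len(template)
        return np.array([signal[i:i + t] @ template
                         for i in range(len(signal) - t + 1)])

    def c_layer(s_responses):
        """Tolerance stage: max-pooling over positions (C units)."""
        return s_responses.max()

    template = np.array([1.0, -1.0, 1.0])

    def embed(pattern, pos, length=12):
        """Place the pattern at a given position in an otherwise blank signal."""
        x = np.zeros(length)
        x[pos:pos + len(pattern)] = pattern
        return x

    # The pooled response is identical wherever the preferred pattern appears:
    r_left  = c_layer(s_layer(embed(template, 1), template))
    r_right = c_layer(s_layer(embed(template, 8), template))
    ```

    Stacking such S/C pairs, with larger templates and pooling ranges at each level, is what yields the position- and scale-tolerant recognition the abstract refers to.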

  17. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Problems of Face Recognition in Patients with Behavioral Variant Frontotemporal Dementia.

    PubMed

    Chandra, Sadanandavalli Retnaswami; Patwardhan, Ketaki; Pai, Anupama Ramakanth

    2017-01-01

    Faces are very special as they are most essential for social cognition in humans. It is partly understood that face processing, in its abstractness, involves several extrastriate areas. One of the most important causes of caregiver suffering in patients with anterior dementia is lack of empathy, which, apart from being a behavioral disorder, could also be due to a failure to categorize the emotions of the people around them. Inclusion criteria: DSM-IV criteria for behavioral variant FTD (bvFTD); patients were tested for prosopagnosia with familiar faces, famous faces, smiling faces, crying faces, and reflected faces using a simple picture card (figure 1). Exclusion criteria: advanced illness and mixed causes. Of 46 patients (15 females, 31 males; mean age 51.5), 24 had defective face recognition: 10/15 females (70%) and 14/31 males. Familiar face recognition defects were found in 6/10 females and 6/14 males; in total, 40% (6/15) of females and 19.35% (6/31) of males with FTD had familiar face recognition defects. Famous face recognition defects were found in 9/10 females and 7/14 males; in total, 60% (9/15) of females with FTD had famous face recognition defects as against 22.6% (7/31) of males. Smiling face recognition defects were found in 8/10 females (in total, 53.33%, or 8/15) and no males. Crying face recognition defects were found in 3/10 females and 2/14 males; in total, 20% (3/15) of females and 6.5% (2/31) of males. Reflected face recognition defects were found in 4 females. Famous face recognition and positive emotion recognition were defective in 80%, with only 20% comprehending positive emotions. Face recognition defects were found in only 45% of males and were more common in females; face recognition is thus more affected in females with FTD. Differential involvement of different aspects of face recognition could be one of the important factors underlying the decline in the emotional and social behavior of these patients. Understanding these pathological processes will give more insight into patient behavior.

  19. Different Neural Processes Accompany Self-Recognition in Photographs Across the Lifespan: An ERP Study Using Dizygotic Twins

    PubMed Central

    Butler, David L.; Mattingley, Jason B.; Cunnington, Ross; Suddendorf, Thomas

    2013-01-01

    Our appearance changes over time, yet we can recognize ourselves in photographs from across the lifespan. Researchers have extensively studied self-recognition in photographs and have proposed that specific neural correlates are involved, but few studies have examined self-recognition using images from different periods of life. Here we compared ERP responses to photographs of participants when they were 5–15, 16–25, and 26–45 years old. We found marked differences between the responses to photographs from these time periods in terms of the neural markers generally assumed to reflect (i) the configural processing of faces (i.e., the N170), (ii) the matching of the currently perceived face to a representation already stored in memory (i.e., the P250), and (iii) the retrieval of information about the person being recognized (i.e., the N400). There was no uniform neural signature of visual self-recognition. To test whether there was anything specific to self-recognition in these brain responses, we also asked participants to identify photographs of their dizygotic twins taken from the same time periods. Critically, this allowed us to minimize the confounding effects of exposure, for it is likely that participants have been similarly exposed to each other's faces over the lifespan. The same pattern of neural response emerged with only one exception: the neural marker reflecting the retrieval of mnemonic information (N400) differed across the lifespan for self but not for twin. These results, as well as our novel approach using twins and photographs from across the lifespan, have wide-ranging consequences for the study of self-recognition and the nature of our personal identity through time. PMID:24069151

  20. Different neural processes accompany self-recognition in photographs across the lifespan: an ERP study using dizygotic twins.

    PubMed

    Butler, David L; Mattingley, Jason B; Cunnington, Ross; Suddendorf, Thomas

    2013-01-01

    Our appearance changes over time, yet we can recognize ourselves in photographs from across the lifespan. Researchers have extensively studied self-recognition in photographs and have proposed that specific neural correlates are involved, but few studies have examined self-recognition using images from different periods of life. Here we compared ERP responses to photographs of participants when they were 5-15, 16-25, and 26-45 years old. We found marked differences between the responses to photographs from these time periods in terms of the neural markers generally assumed to reflect (i) the configural processing of faces (i.e., the N170), (ii) the matching of the currently perceived face to a representation already stored in memory (i.e., the P250), and (iii) the retrieval of information about the person being recognized (i.e., the N400). There was no uniform neural signature of visual self-recognition. To test whether there was anything specific to self-recognition in these brain responses, we also asked participants to identify photographs of their dizygotic twins taken from the same time periods. Critically, this allowed us to minimize the confounding effects of exposure, for it is likely that participants have been similarly exposed to each other's faces over the lifespan. The same pattern of neural response emerged with only one exception: the neural marker reflecting the retrieval of mnemonic information (N400) differed across the lifespan for self but not for twin. These results, as well as our novel approach using twins and photographs from across the lifespan, have wide-ranging consequences for the study of self-recognition and the nature of our personal identity through time.

  1. Implicit self-other discrimination affects the interplay between multisensory affordances of mental representations of faces.

    PubMed

    Zeugin, David; Arfa, Norhan; Notter, Michael; Murray, Micah M; Ionta, Silvio

    2017-08-30

    Face recognition is an apparently straightforward but, in fact, complex ability, encompassing the activation of at least visual and somatosensory representations. Understanding how identity shapes the interplay between these face-related affordances could clarify the mechanisms of self-other discrimination. To this aim, we exploited the so-called "face inversion effect" (FIE), a specific bias in the mental rotation of face images (of other people): compared with inanimate objects, face images require more time to be mentally rotated from upside-down. Via the FIE, which suggests the activation of somatosensory mechanisms, we assessed identity-related changes in the interplay between visual and somatosensory affordances for self- versus other-face representations. Methodologically, to avoid the potential interference of the somatosensory feedback associated with musculoskeletal movements, we introduced the tracking of gaze direction to record participants' responses. Response times from twenty healthy participants showed a larger FIE for self- than for other-faces, suggesting that the impact of somatosensory affordances on the mental representation of faces varies according to identity. The present study lays the foundations of a quantifiable method to implicitly assess self-other discrimination, with possible translational benefits for the early diagnosis of face processing disturbances (e.g. prosopagnosia) and for neurophysiological studies of self-other discrimination in ethological settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. From face processing to face recognition: Comparing three different processing levels.

    PubMed

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

    Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. 
    In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels, as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "Face Recognition" - in fact differ. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e., eye orientation and emotions) and stable (i.e., gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by impairment in basic visual processing. It seems that the EFE decoding deficit is not specific to emotion processing but is at least partly related to a generalized deficit in lower-level perceptual processing, occurring before the stage of emotion processing and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  4. Matching Heard and Seen Speech: An ERP Study of Audiovisual Word Recognition

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer; Rowland, Courtney

    2016-01-01

    Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined a relationship between two distinct stages of visual articulatory processing and the SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented visual silent articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of a mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals’ SIN accuracy improvement in the presence of the talker’s face. PMID:27155219

  5. The unique role of the visual word form area in reading.

    PubMed

    Dehaene, Stanislas; Cohen, Laurent

    2011-06-01

    Reading systematically activates the left lateral occipitotemporal sulcus, at a site known as the visual word form area (VWFA). This site is reproducible across individuals/scripts, attuned to reading-specific processes, and partially selective for written strings relative to other categories such as line drawings. Lesions affecting the VWFA cause pure alexia, a selective deficit in word recognition. These findings must be reconciled with the fact that human genome evolution cannot have been influenced by such a recent and culturally variable activity as reading. Capitalizing on recent functional magnetic resonance imaging experiments, we provide strong corroborating evidence for the hypothesis that reading acquisition partially recycles a cortical territory evolved for object and face recognition, the prior properties of which influenced the form of writing systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    NASA Astrophysics Data System (ADS)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for the parametric representation of the human lips region. The functional diagram of the method is described, and implementation details are given along with an explanation of its key stages and features. The results of automatic detection of the regions of interest are illustrated, and the processing speed of the method on several computers of differing performance is reported. This universal method allows the parametric representation of the speaker's lips to be applied to tasks in biometrics, computer vision, and machine learning, and to the automatic recognition of faces, sign language elements, and audio-visual speech, including lip-reading.

  7. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex, especially those that evolved to support the visual processing of faces, are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  8. Race-specific perceptual discrimination improvement following short individuation training with faces

    PubMed Central

    McGugin, Rankin Williams; Tanaka, James W.; Lebrecht, Sophie; Tarr, Michael J.; Gauthier, Isabel

    2010-01-01

    This study explores the effect of individuation training on the acquisition of race-specific expertise. First, we investigated whether practice individuating other-race faces yields improvement in perceptual discrimination for novel faces of that race. Second, we asked whether there was similar improvement for novel faces of a different race for which participants received equal practice, but in an orthogonal task that did not require individuation. Caucasian participants were trained to individuate faces of one race (African American or Hispanic) and to make difficult eye luminance judgments on faces of the other race. By equating these tasks we are able to rule out raw experience, visual attention or performance/success-induced positivity as the critical factors that produce race-specific improvements. These results indicate that individuation practice is one mechanism through which cognitive, perceptual, and/or social processes promote growth of the own-race face recognition advantage. PMID:21429002

  9. You Look Familiar: How Malaysian Chinese Recognize Faces

    PubMed Central

    Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth

    2012-01-01

    East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762

  10. Postencoding cognitive processes in the cross-race effect: Categorization and individuation during face recognition.

    PubMed

    Ho, Michael R; Pezdek, Kathy

    2016-06-01

    The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.

  11. Sensory Contributions to Impaired Emotion Processing in Schizophrenia

    PubMed Central

    Butler, Pamela D.; Abeles, Ilana Y.; Weiskopf, Nicole G.; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E.; Zemon, Vance; Loughead, James; Gur, Ruben C.; Javitt, Daniel C.

    2009-01-01

    Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective. PMID:19793797

  12. Sensory contributions to impaired emotion processing in schizophrenia.

    PubMed

    Butler, Pamela D; Abeles, Ilana Y; Weiskopf, Nicole G; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E; Zemon, Vance; Loughead, James; Gur, Ruben C; Javitt, Daniel C

    2009-11-01

    Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective.

  13. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  14. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct abilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.
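
The search-efficiency measure used in this record, the slope of reaction time over set size, can be sketched as a simple least-squares fit. A minimal Python sketch; the function name and data values below are illustrative, not taken from the study:

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Search efficiency as the slope (ms/item) of mean RT over set size;
    shallower slopes indicate more efficient search."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts, 1)
    return slope

# Hypothetical data: a shallow slope suggests efficient ("pop-out") search.
faces_slope = search_slope([4, 8, 12, 16], [520.0, 540.0, 560.0, 580.0])   # 5 ms/item
cars_slope = search_slope([4, 8, 12, 16], [600.0, 840.0, 1080.0, 1320.0])  # 60 ms/item
```

A group difference like the car experts' shallower car slope would show up directly as a smaller value of this statistic.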

  15. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Illumination robust face recognition using spatial adaptive shadow compensation based on face intensity prior

    NASA Astrophysics Data System (ADS)

    Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin

    2017-12-01

    Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
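
The block-wise idea behind ASC, correcting shadows locally rather than globally, can be illustrated with a toy sketch. The abstract does not give the actual SAHE/ASC formulas or the face intensity prior, so everything below is a simplified assumption:

```python
import numpy as np

def blockwise_shadow_compensate(img, block=16, target_mean=128.0):
    """Toy local shadow compensation: shift each block's mean intensity
    toward a global target, so dark (shadowed) blocks are brightened and
    bright blocks are dimmed. Illustrative only; the paper's SAHE/ASC
    steps and face intensity prior are not reproduced here."""
    out = img.astype(float)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block]  # view into out
            patch += target_mean - patch.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

Operating per block rather than on the whole image is what lets such a scheme handle shadows cast from a single lighting direction.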

  17. Accurate, fast, and secure biometric fingerprint recognition system utilizing sensor fusion of fingerprint patterns

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Alsharif, Salim; Jagapathi, Rajendarreddy

    2011-04-01

    Fingerprint recognition is one of the first techniques used for automatically identifying people, and today it is still one of the most popular and effective biometric techniques. As fingerprint biometrics become more widely used, accuracy, security, and processing time are major challenges facing fingerprint recognition systems. Previous work has shown that polarization enhancement encoding of fingerprint patterns increases the accuracy and security of fingerprint systems without burdening the processing time. This is mainly because polarization enhancement encoding is inherently a hardware process and does not introduce a detrimental time delay into the overall process. Unpolarized images, however, possess high visual contrast, and when fused properly (without digital enhancement) with polarized ones, they are shown to increase the recognition accuracy and security of the biometric system without any significant processing time delay.
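
The fusion step described here, combining a high-contrast unpolarized image with a polarization-encoded one, can be sketched as a pixel-level weighted average. The weight and the linear rule are my assumptions; the abstract does not specify the actual fusion operator:

```python
import numpy as np

def fuse_fingerprints(unpolarized, polarized, alpha=0.5):
    """Pixel-level weighted fusion of an unpolarized (high-contrast)
    fingerprint image with a polarization-encoded one. The weight alpha
    and the linear blend are illustrative assumptions, not the paper's
    operator."""
    u = unpolarized.astype(float)
    p = polarized.astype(float)
    return np.clip(alpha * u + (1.0 - alpha) * p, 0, 255).astype(np.uint8)
```

A per-pixel blend like this adds essentially no processing delay, consistent with the abstract's claim that fusion preserves throughput.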

  18. Super-recognizers: People with extraordinary face recognition ability

    PubMed Central

    Russell, Richard; Duchaine, Brad; Nakayama, Ken

    2014-01-01

    We tested four people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all four experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than inverted faces, and the four subjects showed a larger ‘inversion effect’ than control subjects, who in turn showed a larger inversion effect than developmental prosopagnosics. This indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these ‘super-recognizers’ are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability, and show that the range of face recognition and face perception ability is wider than previously acknowledged. PMID:19293090

  19. Super-recognizers: people with extraordinary face recognition ability.

    PubMed

    Russell, Richard; Duchaine, Brad; Nakayama, Ken

    2009-04-01

    We tested 4 people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all 4 experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than with inverted faces, and the 4 subjects showed a larger "inversion effect" than did control subjects, who in turn showed a larger inversion effect than did developmental prosopagnosics. This result indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these "super-recognizers" are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability and show that the range of face recognition and face perception ability is wider than has been previously acknowledged.

  20. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to identify a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and a cascade classifier is trained on these features using the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. We applied the proposed method to face images taken from different viewing directions, including horizontal, overhead, and upward-looking views, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; simulations and tests show satisfactory experimental results.
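
The recognition stage, an "improved distance model based on Dlib", is not detailed in the abstract. A generic Dlib-style baseline is nearest-neighbor matching of face embeddings under a distance threshold, sketched below; the names and the threshold value are illustrative:

```python
import numpy as np

def recognize(embedding, gallery, threshold=0.6):
    """Match a face embedding against a gallery (name -> embedding) by
    Euclidean distance, returning 'unknown' if no gallery face is close
    enough. A generic Dlib-style baseline, not the paper's improved model."""
    names = list(gallery)
    dists = [float(np.linalg.norm(embedding - gallery[n])) for n in names]
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else "unknown"
```

Any "improved distance model" would replace the Euclidean metric here while keeping the same match-or-reject structure.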

  1. Face emotion recognition is related to individual differences in psychosis-proneness.

    PubMed

    Germine, L T; Hooker, C I

    2011-05-01

    Deficits in face emotion recognition (FER) in schizophrenia are well documented, and have been proposed as a potential intermediate phenotype for schizophrenia liability. However, research on the relationship between psychosis vulnerability and FER has mixed findings and methodological limitations. Moreover, no study has yet characterized the relationship between FER ability and level of psychosis-proneness. If FER ability varies continuously with psychosis-proneness, this suggests a relationship between FER and polygenic risk factors. We tested two large internet samples to see whether psychometric psychosis-proneness, as measured by the Schizotypal Personality Questionnaire-Brief (SPQ-B), is related to differences in face emotion identification and discrimination or other face processing abilities. Experiment 1 (n=2332) showed that psychosis-proneness predicts face emotion identification ability but not face gender identification ability. Experiment 2 (n=1514) demonstrated that psychosis-proneness also predicts performance on face emotion but not face identity discrimination. The tasks in Experiment 2 used identical stimuli and task parameters, differing only in emotion/identity judgment. Notably, the relationships demonstrated in Experiments 1 and 2 persisted even when individuals with the highest psychosis-proneness levels (the putative high-risk group) were excluded from analysis. Our data suggest that FER ability is related to individual differences in psychosis-like characteristics in the normal population, and that these differences cannot be accounted for by differences in face processing and/or visual perception. Our results suggest that FER may provide a useful candidate intermediate phenotype.

  2. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
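
The winning linking hypothesis, a learned weighted sum of mean IT firing rates, amounts to a linear readout fit by least squares. A minimal sketch of that readout; the array shapes, names, and toy data are illustrative, not the study's:

```python
import numpy as np

def fit_linear_readout(rates, behavior):
    """Fit weights (plus a bias) so that a weighted sum of per-trial mean
    firing rates (trials x neurons) predicts a behavioral score."""
    X = np.column_stack([rates, np.ones(len(rates))])  # append bias column
    w, *_ = np.linalg.lstsq(X, behavior, rcond=None)
    return w

def predict_readout(rates, w):
    """Apply the learned weighted sum to new trials."""
    return np.column_stack([rates, np.ones(len(rates))]) @ w
```

The claim tested in the paper is that this single linear map from IT population rates suffices to reproduce the human performance pattern across all 64 tasks.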

  3. Dynamic adjustments in prefrontal, hippocampal, and inferior temporal interactions with increasing visual working memory load.

    PubMed

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2008-07-01

    The maintenance of visual stimuli across a delay interval in working memory tasks is thought to involve reverberant neural communication between the prefrontal cortex and posterior visual association areas. Recent studies suggest that the hippocampus might also contribute to this retention process, presumably via reciprocal interactions with visual regions. To characterize the nature of these interactions, we performed functional connectivity analysis on an event-related functional magnetic resonance imaging data set in which participants performed a delayed face recognition task. As the number of faces that participants were required to remember was parametrically increased, the right inferior frontal gyrus (IFG) showed a linearly decreasing degree of functional connectivity with the fusiform face area (FFA) during the delay period. In contrast, the hippocampus linearly increased its delay period connectivity with both the FFA and the IFG as the mnemonic load increased. Moreover, the degree to which participants' FFA showed a load-dependent increase in its connectivity with the hippocampus predicted the degree to which its connectivity with the IFG decreased with load. Thus, these neural circuits may dynamically trade off to accommodate the particular mnemonic demands of the task, with IFG-FFA interactions mediating maintenance at lower loads and hippocampal interactions supporting retention at higher loads.

  4. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals through face recognition utilizing facial features. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions for face recognition and identification.

  5. A Qualitative Impairment in Face Perception in Alzheimer's Disease: Evidence from a Reduced Face Inversion Effect.

    PubMed

    Lavallée, Marie Maxime; Gandini, Delphine; Rouleau, Isabelle; Vallet, Guillaume T; Joannette, Maude; Kergoat, Marie-Jeanne; Busigny, Thomas; Rossion, Bruno; Joubert, Sven

    2016-01-01

    Prevalent face recognition difficulties in Alzheimer's disease (AD) have typically been attributed to the underlying episodic and semantic memory impairment. The aim of the current study was to determine if AD patients are also impaired at the perceptual level for faces, more specifically at extracting a visual representation of an individual face. To address this question, we investigated the matching of simultaneously presented individual faces and of other nonface familiar shapes (cars), at both upright and inverted orientation, in a group of mild AD patients and in a group of healthy older controls matched for age and education. AD patients showed a reduced inversion effect (i.e., larger performance for upright than inverted stimuli) for faces, but not for cars, both in terms of error rates and response times. While healthy participants showed a much larger decrease in performance for faces than for cars with inversion, the inversion effect did not differ significantly for faces and cars in AD. This abnormal inversion effect for faces was observed in a large subset of individual patients with AD. These results suggest that AD patients have deficits in higher-level visual processes, more specifically at perceiving individual faces, a function that relies on holistic representations specific to upright face stimuli. These deficits, combined with their memory impairment, may contribute to the difficulties in recognizing familiar people that are often reported in patients suffering from the disease and by their caregivers.

  6. Covert face recognition in congenital prosopagnosia: a group study.

    PubMed

    Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max

    2012-03-01

    Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.

  7. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. We propose a method for recognizing faces under varied expressions against neutral face samples of individuals, via recognized-expression warping and a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted by average facial-expression shape and by coarse- and fine-featured facial texture. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and find that it significantly improves face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  9. Human and animal sounds influence recognition of body language.

    PubMed

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

    In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or of animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information holds for whole body video images with the facial expression blanked, and that it includes human as well as animal sounds.

  10. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  11. Face race processing and racial bias in early development: A perceptual-social linkage.

    PubMed

    Lee, Kang; Quinn, Paul C; Pascalis, Olivier

    2017-06-01

    Infants have asymmetrical exposure to different types of faces (e.g., more human than other-species, more female than male, and more own-race than other-race). What are the developmental consequences of such experiential asymmetry? Here we review recent advances in research on the development of cross-race face processing. The evidence suggests that greater exposure to own- than other-race faces in infancy leads to developmentally early perceptual differences in visual preference, recognition, category formation, and scanning of own- and other-race faces. Further, such perceptual differences in infancy may be associated with the emergence of implicit racial bias, consistent with a Perceptual-Social Linkage Hypothesis. Current and future work derived from this hypothesis may lay an important empirical foundation for the development of intervention programs to combat the early occurrence of implicit racial bias.

  12. Working Memory Impairment in People with Williams Syndrome: Effects of Delay, Task and Stimuli

    PubMed Central

    O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara

    2009-01-01

    Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern extends to working memory (WM) in WS. The present studies provide a new perspective, testing WM for a single stimulus using a delayed recognition paradigm in individuals with WS and typically developing children matched for mental age (MA matches). In three experiments, participants judged whether a second stimulus ‘matched’ an initial sample, either in location or identity. We first examined memory for faces, houses and locations using a 5 s delay (Experiment 1) and a 2 s delay (Experiment 2). We then tested memory for human faces, houses, cat faces, and shoes with a 2 s delay using a new set of stimuli that were better controlled for expression, hairline and orientation (Experiment 3). With the 5 s delay (Experiment 1), the WS group was impaired overall compared to MA matches. While participants with WS tended to perform more poorly than MA matches with the 2 s delay, they also exhibited an uneven profile compared to MA matches. Face recognition was relatively preserved in WS with friendly faces (Experiment 2) but not when the faces had a neutral expression and were less natural looking (Experiment 3). Experiment 3 indicated that memory for object identity was relatively stronger than memory for location in WS. These findings reveal an overall WM impairment in WS that can be overcome under some conditions. Abnormalities in the parietal lobe/dorsal stream in WS may damage not only the representation of spatial location but also may impact WM for visual stimuli more generally. PMID:19084315

  13. Working memory impairment in people with Williams syndrome: effects of delay, task and stimuli.

    PubMed

    O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara

    2009-04-01

    Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern in WS extends to working memory (WM). The present studies provide a new perspective, testing WM for a single stimulus using a delayed recognition paradigm in individuals with WS and typically developing children matched for mental age (MA matches). In three experiments, participants judged whether a second stimulus 'matched' an initial sample, either in location or identity. We first examined memory for faces, houses and locations using a 5s delay (Experiment 1) and a 2s delay (Experiment 2). We then tested memory for human faces, houses, cat faces, and shoes with a 2s delay using a new set of stimuli that were better controlled for expression, hairline and orientation (Experiment 3). With the 5s delay (Experiment 1), the WS group was impaired overall compared to MA matches. While participants with WS tended to perform more poorly than MA matches with the 2s delay, they also exhibited an uneven profile compared to MA matches. Face recognition was relatively preserved in WS with friendly faces (Experiment 2) but not when the faces had a neutral expression and were less natural looking (Experiment 3). Experiment 3 indicated that memory for object identity was relatively stronger than memory for location in WS. These findings reveal an overall WM impairment in WS that can be overcome under some conditions. Abnormalities in the parietal lobe/dorsal stream in WS may damage not only the representation of spatial location but may also impact WM for visual stimuli more generally.

  14. Characterizing the spatio-temporal dynamics of the neural events occurring prior to and up to overt recognition of famous faces.

    PubMed

    Jemel, Boutheina; Schuller, Anne-Marie; Goffaux, Valérie

    2010-10-01

    Although it is generally acknowledged that familiar face recognition is fast, mandatory, and proceeds outside conscious control, it is still unclear whether processes leading to familiar face recognition occur in a linear (i.e., gradual) or a nonlinear (i.e., all-or-none) manner. To test these two alternative accounts, we recorded scalp ERPs while participants indicated whether they recognized as familiar the faces of famous and unfamiliar persons gradually revealed in a descending sequence of frames, from the noisiest to the least noisy. This presentation procedure allowed us to characterize the changes in scalp ERP responses occurring prior to and up to overt recognition. Our main finding is that both gradual and all-or-none processes appear to be involved in overt recognition of familiar faces. Although the N170 and the N250 face-sensitive responses displayed an abrupt activity change at the moment of overt recognition of famous faces, later ERPs encompassing the N400 and late positive component exhibited an incremental increase in amplitude as the point of recognition approached. In addition, famous faces that were not overtly recognized one trial before recognition elicited larger ERP potentials than unfamiliar faces, probably reflecting a covert recognition process. Overall, these findings present evidence that recognition of familiar faces implicates spatio-temporally complex neural processes exhibiting different patterns of activity change as a function of recognition state.

  15. Anatomical connections of the functionally-defined “face patches” in the macaque monkey

    PubMed Central

    Saleem, Kadharbatcha S.

    2017-01-01

    The neural circuits underlying face recognition provide a model for understanding visual object representation, social cognition, and hierarchical information processing. A fundamental piece of information lacking to date is the detailed anatomical connections of the face patches. Here, we injected retrograde tracers into four different face patches (PL, ML, AL, AM) to characterize their anatomical connectivity. We found that the patches are strongly and specifically connected to each other, and individual patches receive inputs from extrastriate cortex, the medial temporal lobe, and three subcortical structures (the pulvinar, claustrum, and amygdala). Inputs from prefrontal cortex were surprisingly weak. Patches were densely interconnected to one another in both feedforward and feedback directions, inconsistent with a serial hierarchy. These results provide the first direct anatomical evidence that the face patches constitute a highly specialized system, and suggest that subcortical regions may play a vital role in routing face-related information to subsequent processing stages. PMID:27263973

  16. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, fusion of near infrared and visible face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract discriminative complementary features from near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the detail features lacking in the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are each sent to a classifier for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The fused visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the small-training-sample setting, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% achieved by the method based on statistical feature fusion.
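    The feature extraction and decision-level fusion pipeline described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the 4×4 histogram grid, the 8×8 DCT block, the nearest-neighbor classifiers, and the majority-vote fusion rule are all assumptions standing in for the paper's unspecified parameters.

    ```python
    import numpy as np

    def lbp_histogram(img, grid=(4, 4)):
        """Basic 3x3 LBP codes, pooled into per-cell normalized histograms
        (one possible reading of the paper's 'partition histograms')."""
        c = img[1:-1, 1:-1]
        neighbors = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:], img[1:-1, 2:],
                     img[2:, 2:], img[2:, 1:-1], img[2:, 0:-2], img[1:-1, 0:-2]]
        codes = np.zeros_like(c, dtype=np.uint8)
        for bit, n in enumerate(neighbors):
            codes |= (n >= c).astype(np.uint8) << bit   # one bit per neighbor comparison
        hists = []
        for rows in np.array_split(codes, grid[0], axis=0):
            for cell in np.array_split(rows, grid[1], axis=1):
                h, _ = np.histogram(cell, bins=256, range=(0, 256))
                hists.append(h / max(cell.size, 1))
        return np.concatenate(hists)

    def dct_lowfreq(img, k=8):
        """Keep the top-left k x k block of the orthonormal 2D DCT-II
        (the low-frequency part used for the NIR image)."""
        def dct_mat(N):
            x = np.arange(N)
            M = np.cos(np.pi * (2 * x[None, :] + 1) * x[:, None] / (2 * N))
            M[0] *= 1 / np.sqrt(2)
            return M * np.sqrt(2 / N)
        n, m = img.shape
        D = dct_mat(n) @ img @ dct_mat(m).T
        return D[:k, :k].ravel()

    def nn_label(feat, gallery_feats, gallery_labels):
        """A simple nearest-neighbor classifier per feature channel."""
        d = [np.linalg.norm(feat - g) for g in gallery_feats]
        return gallery_labels[int(np.argmin(d))]

    def fuse_decisions(labels):
        """Decision-level fusion: majority vote over the per-channel labels."""
        vals, counts = np.unique(labels, return_counts=True)
        return vals[np.argmax(counts)]
    ```

    In use, one would compute `lbp_histogram` on the visible image, `lbp_histogram` and `dct_lowfreq` on the NIR image, label each with `nn_label` against a gallery, and pass the three labels to `fuse_decisions`.
    
    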

  17. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    PubMed

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

    This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis across studies shows that the research has focused on facial affect recognition without paying similar attention to the structural encoding involved in facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, identifying the face recognition tasks used to assess face processing abilities in ADHD, and noting aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. [Face recognition in patients with autism spectrum disorders].

    PubMed

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research conducted on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, depending on the experimental situation or developmental stage, the performance and strategies of ASD patients can be comparable to those of the control group, suggesting that face recognition in ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients, and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces, and in the atypical development of face recognition, eliciting symptoms of unstable behavioral characteristics in these patients. Additionally, face recognition in ASD patients is examined from different perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the clinical spectrum of ASD.

  19. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important topic for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local binary pattern) and SWLD (Simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is used to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
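    Two ingredients named above, the simplified Weber differential excitation and the symmetric Kullback-Leibler distance used to compare encoded images, can be sketched minimally as follows. The 3×3 neighborhood, the 32-bin quantization, and the epsilon smoothing are assumptions for illustration, not the paper's exact parameters.

    ```python
    import numpy as np

    def weber_excitation(img, eps=1e-6):
        """Differential excitation of a (simplified) Weber descriptor: the arctan
        of the summed neighbor-minus-center differences over the center intensity."""
        c = img[1:-1, 1:-1].astype(float)
        h, w = c.shape
        diff = sum(img[dy:dy + h, dx:dx + w].astype(float) - c
                   for dy in (0, 1, 2) for dx in (0, 1, 2)
                   if not (dy == 1 and dx == 1))   # the 8 neighbors of each pixel
        return np.arctan(diff / (c + eps))

    def swld_histogram(img, bins=32):
        """Quantize the excitation map into a normalized histogram descriptor."""
        xi = weber_excitation(img)
        h, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
        return h / h.sum()

    def symmetric_kl(p, q, eps=1e-10):
        """Symmetric Kullback-Leibler distance between two histograms,
        with epsilon smoothing to avoid division by zero."""
        p = p + eps
        q = q + eps
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    ```

    Matching would then amount to computing `swld_histogram` (and the analogous LBP histogram) for a probe and each gallery face, and selecting the gallery identity with the smallest `symmetric_kl` distance.
    
    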

  20. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    PubMed

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main finding of the study was that musicians' behavioral responses and N170 responses were more affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  1. Specific and nonspecific neural activity during selective processing of visual representations in working memory.

    PubMed

    Oh, Hwamee; Leung, Hoi-Chung

    2010-02-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.

  2. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition declined steadily as the relationship between the voice and the accompanying face weakened. Moreover, self-reported confidence in voice recognition remained high for correct responses even as the proportion of such responses declined across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and the results are discussed in the context of a person-recognition framework.

  3. How "Central" Is Central Coherence?: Preliminary Evidence on the Link between Conceptual and Perceptual Processing in Children with Autism

    ERIC Educational Resources Information Center

    Lopez, Beatriz; Leekam, Susan R.; Arts, Gerda R. J.

    2008-01-01

    This study aimed to test the assumption drawn from weak central coherence theory that a central cognitive mechanism is responsible for integrating information at both conceptual and perceptual levels. A visual semantic memory task and a face recognition task measuring use of holistic information were administered to 15 children with autism and 16…

  4. Cognitive Sciences

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session MP4 includes short reports on: (1) Face Recognition in Microgravity: Is Gravity Direction Involved in the Inversion Effect?; (2) Motor Timing under Microgravity; (3) Perceived Self-Motion Assessed by Computer-Generated Animations: Complexity and Reliability; (4) Prolonged Weightlessness Reference Frames and Visual Symmetry Detection; (5) Mental Representation of Gravity During a Locomotor Task; and (6) Haptic Perception in Weightlessness: A Sense of Force or a Sense of Effort?

  5. Formal implementation of a performance evaluation model for the face recognition system.

    PubMed

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed objective evaluations by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.

  6. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  7. [Comparative studies of face recognition].

    PubMed

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates, possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of its conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for its conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  8. Impaired face recognition is associated with social inhibition

    PubMed Central

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-01-01

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety. PMID:26776300

  9. Impaired face recognition is associated with social inhibition.

    PubMed

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important task in deformable pattern recognition. Especially in cooperative authentication scenarios such as ATM fraud detection, attendance recording, video tracking, and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization, and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray-level textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector must search for faces everywhere (say, 352×250 pixels) and all the time (e.g., at a 30 FPS capture rate). In the current paper we propose to address both of the aforementioned concerns with a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from a gray-level face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity enters the viewing area. To improve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (false rejection) and FA (false acceptance) rates of the face detection system.
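
The change-triggered gating described above can be sketched in miniature. The code below is a stand-in for the NFGS trigger, not the paper's algorithm: it uses plain frame differencing against a reference frame, and the function names and thresholds are illustrative assumptions.

```python
def changed_fraction(frame, ref, tol=10):
    """Fraction of pixels whose intensity differs from the reference
    frame by more than tol (frames given as flat lists of 0-255 values)."""
    changed = sum(1 for a, b in zip(frame, ref) if abs(a - b) > tol)
    return changed / len(ref)

def should_run_detector(frame, ref, trigger=0.02):
    # Run the (expensive) face detector only when enough of the scene
    # has changed relative to the reference frame; otherwise skip it.
    return changed_fraction(frame, ref) > trigger
```

In the paper's setting the NFGS binary figure-ground map would also supply the RoI in which the detector runs; here only the run/skip decision is illustrated.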

  11. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
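
The core idea, storing a pixel-wise average of enrolment images as the verification template, can be sketched as follows. This is a minimal illustration assuming pre-aligned, equal-size grayscale images, not the phone's actual verification pipeline; the function names and threshold are invented for the example.

```python
def face_average(images):
    """Pixel-wise mean of aligned, equal-size grayscale face images,
    each given as a flat list of intensities."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def verify(probe, template, threshold):
    """Accept the probe if its Euclidean distance to the stored
    template is within the threshold."""
    dist = sum((p - t) ** 2 for p, t in zip(probe, template)) ** 0.5
    return dist <= threshold
```

Averaging many enrolment shots damps lighting and expression variation, which is why an averaged template generalises better than any single stored image.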

  12. Gender differences in recognition of toy faces suggest a contribution of experience.

    PubMed

    Ryan, Kaitlin F; Gauthier, Isabel

    2016-12-01

    When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally, either for evolutionary reasons or through the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but limited to ad hoc dimensionality reduction techniques, and it lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near-infrared response spectrum.
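
The paper's spatiospectral covariance fusion is not reproduced here, but the role of partial least squares can be illustrated with its simplest building block: the first PLS component, whose weight vector is proportional to the covariance between the (fused) features and the response. This is a hedged textbook sketch under the assumption of a single class-indicator response vector, not the authors' classifier.

```python
import numpy as np

def pls1_component(X, y):
    """First PLS component via one NIPALS step: weight w is
    proportional to X'y (after centering), score t = Xw."""
    Xc = X - X.mean(axis=0)          # center features
    yc = y - y.mean()                # center response
    w = Xc.T @ yc                    # covariance direction
    w /= np.linalg.norm(w)
    return Xc @ w, w                 # scores and unit weight vector
```

Full PLS regression would deflate X and repeat to extract further components; one component already yields a projection that separates the two classes in this toy case.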

  14. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and face recognition, mainly by investigating the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on how different preprocessing methods affect recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with the erosion and dilation of opening and closing operations and with an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better, since it is a nonlinear feature extraction method, and thus obtains a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
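
A textbook sketch of the KPCA step is given below; it omits the paper's preprocessing pipeline (skin segmentation, integral projection, morphological operations) and simply projects training samples onto the leading kernel principal components. The `degree` parameter is the power of the polynomial kernel that the abstract notes can affect recognition.

```python
import numpy as np

def kpca_features(X, n_components=2, degree=2):
    """Kernel PCA with polynomial kernel k(x, y) = (x.y + 1)**degree:
    build the Gram matrix, center it in feature space, and project the
    training samples onto the leading components."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # pick the largest
    # Scale eigenvectors so projections have feature-space units.
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas
```

Recognition would then proceed, for example, by nearest-neighbor matching in the projected space.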

  15. A real time mobile-based face recognition with fisherface methods

    NASA Astrophysics Data System (ADS)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face in a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server and look for the person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods in two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system processes the input image into the best image for the recognition phase; the purpose is to reduce noise and increase signal in the image. For the recognition phase, we use the Fisherface method, chosen because it performs well even with limited training data. In our experiments, the accuracy of face recognition using Fisherface was 90%.
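
Fisherface in full applies PCA followed by multi-class linear discriminant analysis to pixel vectors; as a hedged illustration of the core criterion, the two-class Fisher discriminant direction can be computed directly for 2-D points (the helper names and toy dimensionality are assumptions for the example).

```python
def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant direction for two classes of 2-D
    points: w proportional to Sw^-1 (m_a - m_b), where Sw is the
    pooled within-class scatter matrix."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    dmx, dmy = ma[0] - mb[0], ma[1] - mb[1]
    # Apply the explicit 2x2 inverse of Sw to the mean difference.
    return [( sw[1][1] * dmx - sw[0][1] * dmy) / det,
            (-sw[1][0] * dmx + sw[0][0] * dmy) / det]
```

Projecting samples onto w maximizes between-class separation relative to within-class spread, which is what makes the method robust when training data are scarce.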

  16. Gender interactions in the recognition of emotions and conduct symptoms in adolescents.

    PubMed

    Halász, József; Aspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit

    2014-01-01

    According to the literature, impairment in the recognition of emotions might be related to an antisocial developmental pathway. In the present study, the gender-specific relationship between emotion recognition and conduct symptoms was studied in non-clinical adolescents. After informed consent, 29 boys and 24 girls (13-16 years, 14 ± 0.1 years) participated in the study. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of basic emotions was analyzed according to both the gender of the participants and the gender of the stimulus faces via the "Facial Expressions of Emotion - Stimuli and Tests". Girls were significantly better than boys at recognizing disgust, irrespective of the gender of the stimulus faces, although both genders recognized disgust significantly better in male stimulus faces than in female stimulus faces. Both boys and girls recognized sadness significantly better in female stimulus faces than in male stimulus faces. There was no gender effect (of either participant or stimulus face) on the recognition of other emotions. Conduct scores in boys were inversely correlated with the recognition of fear in male stimulus faces (R=-0.439, p<0.05) and with overall emotion recognition in male stimulus faces (R=-0.558, p<0.01). In girls, conduct scores showed a tendency toward a positive correlation with disgust recognition in female stimulus faces (R=0.376, p<0.07). A gender-specific interaction between the recognition of emotions and the antisocial developmental pathway is suggested.

  17. Split-brain reveals separate but equal self-recognition in the two cerebral hemispheres.

    PubMed

    Uddin, Lucina Q; Rayman, Jan; Zaidel, Eran

    2005-09-01

    To assess the ability of the disconnected cerebral hemispheres to recognize images of the self, a split-brain patient (an individual who underwent complete cerebral commissurotomy to relieve intractable epilepsy) was tested using morphed self-face images presented to one visual hemifield (projecting to one hemisphere) at a time while making "self/other" judgments. The performance of the right and left hemispheres of this patient as assessed by a signal detection method was not significantly different, though a measure of bias did reveal hemispheric differences. The right and left hemispheres of this patient independently and equally possessed the ability to self-recognize, but only the right hemisphere could successfully recognize familiar others. This supports a modular concept of self-recognition and other-recognition, separately present in each cerebral hemisphere.

  18. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands, while the PCA (principal component analysis) and LDA (linear discriminant analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
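
The abstract does not give the exact CCR formula, so the sketch below substitutes the classic normalized compression distance (NCD) as a stand-in similarity, using the standard library's zlib rather than JPEG; identities are matched by the largest similarity, mirroring the paper's "largest CCR" rule. All names here are illustrative assumptions.

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size of a byte string."""
    return len(zlib.compress(data))

def similarity(probe: bytes, gallery: bytes) -> float:
    """1 minus the normalized compression distance
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Similar inputs share structure, so their concatenation
    compresses almost as well as either alone."""
    cp, cg = csize(probe), csize(gallery)
    cm = csize(probe + gallery)
    return 1.0 - (cm - min(cp, cg)) / max(cp, cg)

def match(probe: bytes, gallery: dict) -> str:
    """Return the gallery identity with the largest similarity."""
    return max(gallery, key=lambda name: similarity(probe, gallery[name]))
```

As in the paper, the cost of one comparison is essentially one compression of the mixed data.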

  19. The relationship between face recognition ability and socioemotional functioning throughout adulthood.

    PubMed

    Turano, Maria Teresa; Viggiano, Maria Pia

    2017-11-01

    The relationship between face recognition ability and socioemotional functioning has been widely explored. However, how aging modulates this association, in terms of both objective performance and subjective self-perception, has received little attention. Participants aged between 18 and 81 years performed a face memory test and completed subjective face recognition and socioemotional questionnaires. General and social anxiety and neuroticism traits account for individual variation in face recognition abilities during adulthood. Aging modulates these relationships: with increasing age, individuals presenting higher levels of these traits also show lower face recognition ability. Intriguingly, an association between depression and face recognition abilities emerges with increasing age. Overall, the present results emphasize the importance of embedding face metacognition measurement in such studies and suggest that aging is an important factor contributing to the relationship between socioemotional and face-cognitive functioning.

  20. Why we respond faster to the self than to others? An implicit positive association theory of self-advantage during implicit face recognition.

    PubMed

    Ma, Yina; Han, Shihui

    2010-06-01

    Human adults usually respond faster to their own faces than to those of others. We tested the hypothesis that an implicit positive association (IPA) with the self mediates this self-advantage in face recognition in four experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and thereby weakened the IPA with the self, we found that the self-face advantage in an implicit face-recognition task requiring identification of face orientation was eliminated by SCT priming. Moreover, the SCT effect on self-face recognition was evident only with left-hand responses. Furthermore, the SCT effect on self-face recognition was observed in both Chinese and American participants. Our findings support the IPA hypothesis, which defines a social cognitive mechanism for the self-advantage in face recognition.

  1. fMRI-activation during drawing a naturalistic or sketchy portrait.

    PubMed

    Schaer, K; Jahn, G; Lotze, M

    2012-07-15

    The neural processes underlying naturalistic drawing can be discerned as object recognition and analysis, attention processes guiding eye-hand interaction, encoding of visual features in an allocentric reference frame, transfer into a motor command, and precise motor guidance under tight sensorimotor feedback. Cerebral representations during naturalistic drawing in a real-life paradigm have sparsely been investigated. Using a functional magnetic resonance imaging (fMRI) paradigm, we measured 20 naive subjects while they drew a portrait from a frontal face presented as a photograph. Participants were asked to draw the portrait in either a naturalistic or a sketchy characteristic way. Tracing the contours of the face with a pencil or passively viewing the face served as control conditions. Compared to passive viewing, naturalistic and sketchy drawing recruited predominantly the dorsal visual pathway, somatosensory and motor areas, and bilateral BA 44. The right occipital lobe, middle temporal (MT) area, and the fusiform face area were also more active during drawing than during passive viewing. Compared to tracing with a pencil, both drawing tasks increasingly involved the bilateral precuneus together with the cuneus and right inferior temporal lobe. Overall, our study identified cerebral areas characteristic of previously proposed aspects of drawing: face perception and analysis (fusiform gyrus and higher visual areas), encoding and retrieval of locations in an allocentric reference frame (precuneus), and continuous feedback processes during motor output (parietal sulcus, cerebellar hemisphere). Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing

    PubMed Central

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2011-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636

  3. The surprisingly high human efficiency at learning to recognize faces

    PubMed Central

    Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.

    2009-01-01

    We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918

  4. Two areas for familiar face recognition in the primate brain.

    PubMed

    Landi, Sofia M; Freiwald, Winrich A

    2017-08-11

    Familiarity alters face recognition: Familiar faces are recognized more accurately than unfamiliar ones and under difficult viewing conditions when unfamiliar face recognition fails. The neural basis for this fundamental difference remains unknown. Using whole-brain functional magnetic resonance imaging, we found that personally familiar faces engage the macaque face-processing network more than unfamiliar faces. Familiar faces also recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole. These two areas, but not the core face-processing network, responded to familiar faces emerging from a blur with a characteristic nonlinear surge, akin to the abruptness of familiar face recognition. In contrast, responses to unfamiliar faces and objects remained linear. Thus, two temporal lobe areas extend the core face-processing network into a familiar face-recognition system. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  5. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy combining incremental subspace learning adapts the template to appearance changes and reduces the influence of occlusion and illumination variation, leading to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In tracking and recognition experiments on challenging data in which faces undergo occlusion, illumination variation, and significant pose variation, including the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
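
The paper's incremental local sparse representation is considerably more elaborate (covariance pooling, online template updates); as a hedged illustration of the underlying idea, a probe can be reconstructed from each subject's gallery vectors and assigned to the subject with the smallest reconstruction residual. Here least squares stands in for true sparse coding, and all names are invented for the example.

```python
import numpy as np

def src_classify(probe, gallery):
    """Simplified sparse-representation-style classifier: reconstruct
    the probe from each subject's gallery vectors by least squares and
    return the subject with the smallest residual."""
    best, best_res = None, float("inf")
    for name, vecs in gallery.items():
        A = np.column_stack(vecs)                      # d x n_i dictionary
        coef, *_ = np.linalg.lstsq(A, probe, rcond=None)
        res = np.linalg.norm(probe - A @ coef)         # reconstruction error
        if res < best_res:
            best, best_res = name, res
    return best
```

A sparse solver (e.g. an l1 penalty over the pooled dictionary) replaces the per-class least squares in the full method, but the residual-based decision rule is the same.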

  7. Face recognition in the thermal infrared domain

    NASA Astrophysics Data System (ADS)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most research on face recognition is based on visible light: state-of-the-art face recognition systems operating in the visible spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible-light images and can be used to improve human face recognition algorithms in several aspects. Mid-wavelength and far-wavelength infrared, also referred to as thermal infrared, thus seem to be promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain, considering two approaches: stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  8. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. First, NSCT is applied to the infrared and visible face images respectively, exploiting image information at multiple scales, orientations, and frequency bands. Then, to exploit effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and the local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion combines all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
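
The NSCT decomposition itself is beyond a short sketch, but the LBP descriptor applied to each frequency band can be illustrated. Below is the basic 3x3 LBP operator: each neighbor contributes one bit depending on whether it is at least as bright as the center pixel, and the per-image histogram of codes serves as the texture feature. This is a generic textbook LBP, not the paper's LGBP variant.

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 grayscale patch (list of 3 rows):
    bit i is set when the i-th neighbor (clockwise from top-left)
    is >= the center pixel."""
    c = patch[1][1]
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels of a
    2-D grayscale image (list of rows)."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for col in range(1, len(img[0]) - 1):
            patch = [row[col - 1:col + 2] for row in img[r - 1:r + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

In the paper's pipeline such histograms, computed per NSCT band (with Gabor filtering for the LGBP variant), are compared and fused at score level.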

  9. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  10. Emotional modulation of visual remapping of touch.

    PubMed

    Cardini, Flavia; Bertini, Caterina; Serino, Andrea; Ladavas, Elisabetta

    2012-10-01

    The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by specific key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, either on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral felt tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. In order to test whether this result can be generalized to other negative emotions or whether it is a fear-specific effect, we ran a second experiment, where participants watched movies of faces (touched or approached by fingers) with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face and not toward an angry face. Results of the present experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.

  11. Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.

    PubMed

    Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas

    2015-08-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Recognition Memory Span in Autopsy-Confirmed Dementia with Lewy Bodies and Alzheimer’s Disease

    PubMed Central

    Salmon, David P.; Heindel, William C.; Hamilton, Joanne M.; Filoteo, J. Vincent; Cidambi, Varun; Hansen, Lawrence A.; Masliah, Eliezer; Galasko, Douglas

    2016-01-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and normal control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from Long-Term Storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. PMID:26184443

  13. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions is favored over recognition of emotions communicated through music. To compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey of 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for by the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share general cognitive abilities such as attention, memory, and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, evaluated differently as relevant emotional cues.

  14. Better object recognition and naming outcome with MRI-guided stereotactic laser amygdalohippocampotomy for temporal lobe epilepsy.

    PubMed

    Drane, Daniel L; Loring, David W; Voets, Natalie L; Price, Michele; Ojemann, Jeffrey G; Willie, Jon T; Saindane, Amit M; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L; Miller, John W; Meador, Kimford J; Gross, Robert E

    2015-01-01

    Patients with temporal lobe epilepsy (TLE) experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, and semantic memory), due to "collateral damage" to temporal regions outside the hippocampus following open surgical approaches. We predicted that stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions that are critical for these cognitive processes. Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of 19 patients with medically intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant), and a comparable series of TLE patients undergoing standard surgical approaches (n=39) using a prospective, nonrandomized, nonblinded, parallel-group design. Performance declines were significantly greater for the patients with dominant TLE who were undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<0.0001, η2=0.57, and F=11.2, p<0.001, η2=0.39, respectively), and for the patients with nondominant TLE undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<0.02, η2=0.19). When examined on an individual subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 patients undergoing standard surgical approaches declined on one or more measures for both object types (p<0.001, Fisher's exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (nondominant) TLE patients declined on face recognition. 
Preliminary results suggest (1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and (2) the hippocampus does not appear to be an essential component of neural networks underlying name retrieval or recognition of common objects or famous faces. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  15. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  16. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…

  17. The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso

    2015-01-01

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457

  18. Artificial faces are harder to remember

    PubMed Central

    Balas, Benjamin; Pacella, Jonathan

    2015-01-01

    Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852

  19. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    PubMed

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.
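One of the simplest computational color constancy methods of the kind the abstract refers to is the grey-world estimate: assume the average scene reflectance is achromatic, so the mean RGB of the image approximates the illuminant color. The paper does not say which estimator it evaluated, and the pixel values below are invented, so this is only an illustrative sketch:

```python
def gray_world_illuminant(pixels):
    """Grey-world illuminant estimate: the per-channel mean of the image,
    normalized to unit length so only the light's *color* is reported."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    norm = (r * r + g * g + b * b) ** 0.5
    return (r / norm, g / norm, b / norm)

# A toy image captured under a reddish light: the red channel is inflated
# in every pixel, which the estimator should pick up.
pixels = [(200, 100, 100), (180, 90, 95), (220, 110, 105)]
light = gray_world_illuminant(pixels)
```

Once the illuminant is estimated, each channel can be divided by the corresponding component of `light` to discount the illumination before matching, which is how such an estimate would feed a recognition framework like the one described.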

  1. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Relations between scanning and recognition of own- and other-race faces in 6- and 9-month-old infants.

    PubMed

    Liu, Shaoying; Quinn, Paul C; Xiao, Naiqi G; Wu, Zhijun; Liu, Guangxi; Lee, Kang

    2018-06-01

    Infants typically see more own-race faces than other-race faces. Existing evidence shows that this difference in face race experience has profound consequences for face processing: as early as 6 months of age, infants scan own- and other-race faces differently and display superior recognition for own- relative to other-race faces. However, it is unclear whether scanning of own-race faces is related to the own-race recognition advantage in infants. To bridge this gap in the literature, the current study used eye tracking to investigate the relation between own-race face scanning and recognition in 6- and 9-month-old Asian infants (N = 82). The infants were familiarized with dynamic own- and other-race faces, and then their face recognition was tested with static face images. Both age groups recognized own- but not other-race faces. Also, regardless of race, the more infants scanned the eyes of the novel versus familiar faces at test, the better their face-recognition performance. In addition, both 6- and 9-month-olds fixated significantly longer on the nose of own-race faces, and greater fixation on the nose during test trials correlated positively with individual novelty preference scores in the own- but not other-race condition. The results suggest that some aspects of the relation between recognition and scanning are independent of differential experience with face race, whereas other aspects are affected by such experience. More broadly, the findings imply that scanning and recognition may become linked during infancy at least in part through the influence of perceptual experience. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  3. Schizotypy and impaired basic face recognition? Another non-confirmatory study.

    PubMed

    Bell, Vaughan; Halligan, Peter

    2015-12-01

    Although schizotypy has been found to be reliably associated with a reduced recognition of facial affect, the few studies that have tested the association between basic face recognition abilities and schizotypy have found mixed results. This study formally tested the association in a large non-clinical sample with established neurological measures of face recognition. Two hundred and twenty-seven participants completed the Oxford-Liverpool Inventory of Feelings and Experiences schizotypy scale and completed the Famous Faces Test and the Cardiff Repeated Recognition Test for Faces. No association between any schizotypal dimension and performance on either of the facial recognition and learning tests was found. The null results can be accepted with a high degree of confidence. Further additional evidence is provided for a lack of association between schizotypy and basic face recognition deficits. © 2014 Wiley Publishing Asia Pty Ltd.

  4. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  5. The hierarchical brain network for face recognition.

    PubMed

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions were significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  6. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires crossmodal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logical regression [BLR]) is trained then tested with the score vectors by using 10-fold cross validations. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
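The final classification stage described above, labeling a three-score vector as a genuine or imposter match, can be sketched with a plain nearest-neighbour rule (one of the classifiers the paper mentions). The score values are hypothetical, and the real system trains on many subjects with 10-fold cross-validation:

```python
def predict_1nn(train_X, train_y, x):
    """1-nearest-neighbour on cross-matched score vectors: label a probe
    as genuine (1) or imposter (0) by its closest training vector."""
    def sqdist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: sqdist(train_X[i], x))
    return train_y[best]

# Each vector holds three cross-modal match scores (invented values), one per
# algorithm, mirroring the paper's circular Gaussian filter / face pattern
# byte / LDA trio.
train_X = [[0.90, 0.80, 0.85], [0.88, 0.90, 0.80],   # genuine pairs
           [0.20, 0.30, 0.25], [0.15, 0.10, 0.30]]   # imposter pairs
train_y = [1, 1, 0, 0]

label = predict_1nn(train_X, train_y, [0.85, 0.82, 0.80])  # → 1 (genuine)
```

Stacking per-algorithm scores into a vector and learning a decision boundary over them is what lets the combination outperform any single matcher.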

  7. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
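The "simple learned weighted sum" linking hypothesis amounts to a linear readout of mean firing rates. Below is a toy sketch of that idea only: the rates and behavioral targets are invented, the study's actual fitting procedure and scale (~60,000 IT neurons) are not reproduced, and plain gradient descent stands in for whatever regression the authors used:

```python
def readout(rates, weights, bias=0.0):
    """Decoder output: a learned weighted sum of mean firing rates."""
    return sum(w * r for w, r in zip(weights, rates)) + bias

def train(X, y, lr=0.01, epochs=2000):
    """Fit the weights by stochastic gradient descent on squared error."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for rates, target in zip(X, y):
            err = readout(rates, w, b) - target
            w = [wi - lr * err * ri for wi, ri in zip(w, rates)]
            b -= lr * err
    return w, b

# Hypothetical mean rates of three 'IT neurons' on four images, and a
# behavioral performance value to predict for each image.
X = [[1.0, 0.2, 0.5], [0.9, 0.3, 0.4], [0.1, 0.8, 0.9], [0.2, 0.9, 0.8]]
y = [1.0, 1.0, 0.0, 0.0]
w, b = train(X, y)
```

After training, the readout separates the two image groups; the paper's claim is that exactly this kind of linear decoder over distributed IT rates suffices to reproduce the human behavioral pattern.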

  8. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots through close-up estimation. In addition, we automatically detect and recognize the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, which consists of a hybrid text-image indexing and retrieval platform for video news.

  9. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces using face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show a promising future for the proposed audiovisual-based unsupervised speaker identification system.

  10. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
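The first PCANet stage, learning a filter bank as the principal components of mean-removed image patches, can be sketched as follows. This is an illustrative single-stage reconstruction, not the authors' code: the patch size, filter count, and random patch sampling are assumptions, and the cascaded second stage, binary hashing, and blockwise histograms are omitted.

```python
import numpy as np

def learn_pca_filters(images, patch=5, n_filters=8, seed=0):
    """Learn one PCA filter bank from mean-removed image patches.
    PCANet cascades two such stages, then applies binary hashing
    and blockwise histograms to form the final feature."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        for _ in range(100):  # sample random patches from each image
            r = rng.integers(0, img.shape[0] - patch + 1)
            c = rng.integers(0, img.shape[1] - patch + 1)
            p = img[r:r + patch, c:c + patch].astype(float).ravel()
            patches.append(p - p.mean())  # remove the patch mean
    X = np.stack(patches)  # shape (n_patches, patch * patch)
    # Top principal directions of the patch matrix become the filters
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)
```

The RandNet and LDANet variants mentioned in the abstract would replace the SVD step with random filters or LDA-derived filters, keeping the same topology.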

  11. Age differences in accuracy and choosing in eyewitness identification and face recognition.

    PubMed

    Searcy, J H; Bartlett, J C; Memon, A

    1999-05-01

    Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in the perceptual integrative processes underlying the experience of familiarity.

  12. Disrupted neural processing of emotional faces in psychopathy.

    PubMed

    Contreras-Rodríguez, Oren; Pujol, Jesus; Batalla, Iolanda; Harrison, Ben J; Bosque, Javier; Ibern-Regàs, Immaculada; Hernández-Ribas, Rosa; Soriano-Mas, Carles; Deus, Joan; López-Solà, Marina; Pifarré, Josep; Menchón, José M; Cardoner, Narcís

    2014-04-01

    Psychopaths show a reduced ability to recognize emotional facial expressions, which may disturb the development of interpersonal relationships and successful social adaptation. Behavioral hypotheses point toward an association between emotion recognition deficits in psychopathy and amygdala dysfunction. Our prediction was that amygdala dysfunction would combine deficient activation with disturbances in functional connectivity with cortical regions of the face-processing network. Twenty-two psychopaths and 22 control subjects were assessed, and functional magnetic resonance maps were generated to identify both brain activation and task-induced functional connectivity using psychophysiological interaction analysis during an emotional face-matching task. Results showed significant amygdala activation in control subjects only, but differences between study groups did not reach statistical significance. In contrast, psychopaths showed significantly increased activation in visual and prefrontal areas, with the latter activation being associated with psychopaths' affective-interpersonal disturbances. Psychophysiological interaction analyses revealed a reciprocal reduction in functional connectivity between the left amygdala and visual and prefrontal cortices. Our results suggest that emotional stimulation may evoke a relevant cortical response in psychopaths, but that a disruption exists in the processing of emotional faces involving the reciprocal functional interaction between the amygdala and neocortex, consistent with the notion of a failure to integrate emotion into cognition in psychopathic individuals.

  13. Further insight into self-face recognition in schizophrenia patients: Why ambiguity matters.

    PubMed

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stephane

    2016-03-01

    Although some studies have reported self-face processing deficits specifically in patients with schizophrenia disorder (SZ), it remains unclear whether these deficits instead reflect a more global face processing deficit. Contradictory results are probably due to the different methodologies employed and the lack of control of other confounding factors. Moreover, no study has so far evaluated possible daily-life self-face recognition difficulties in SZ. Therefore, our primary objective was to investigate self-face recognition in patients suffering from SZ compared to healthy controls (HC) using an "objective measure" (reaction time and accuracy) and a "subjective measure" (self-report of daily self-face recognition difficulties). Twenty-four patients with SZ and 23 HC performed a self-face recognition task and completed a questionnaire evaluating daily difficulties in self-face recognition. The recognition task material consisted of three different faces (the participant's own, a famous, and an unknown face) morphed in steps of 20%. Results showed that SZ were overall slower than HC regardless of face identity, but less accurate only for faces containing 60%-40% morphing. Moreover, SZ and HC reported a similar number of daily problems with self/other face recognition. No significant correlations were found between objective and subjective measures (p > 0.05). The small sample size and relatively mild severity of psychopathology do not allow us to generalize our results. These results suggest that: (1) patients with SZ are as capable of recognizing their own face as HC, although they are susceptible to ambiguity; (2) there are far fewer self-recognition deficits in schizophrenia patients than previously postulated. Copyright © 2015 Elsevier Ltd. All rights reserved.
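The graded-morph manipulation, blending two face identities in 20% steps, can be approximated by a simple cross-dissolve of aligned images. This is only an illustrative sketch: the study's actual morphs presumably also warp facial geometry between the two identities, which this intensity-only blend omits.

```python
import numpy as np

def morph_steps(face_a, face_b, step=0.2):
    """Cross-dissolve two aligned face images at fixed morph
    increments (0%, 20%, ..., 100% of face_b)."""
    weights = np.linspace(0.0, 1.0, int(round(1.0 / step)) + 1)
    return [(1.0 - w) * face_a + w * face_b for w in weights]
```

The 60%-40% blends in the middle of this sequence are the maximally ambiguous stimuli on which the patient group was less accurate.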

  14. Adaptive gamma correction-based expert system for nonuniform illumination face enhancement

    NASA Astrophysics Data System (ADS)

    Abdelhamid, Iratni; Mustapha, Aouache; Adel, Oulefki

    2018-03-01

    The image quality of a face recognition system suffers under severe lighting conditions. This study therefore develops an approach for nonuniform illumination adjustment based on an adaptive gamma correction (AdaptGC) filter. An approach for adaptive gain factor prediction was developed via a neural network model with cross-validation (NN-CV). To achieve this objective, a gamma correction function and its effects on face image quality at different gain values were examined first. Second, an orientation histogram (OH) algorithm was assessed as a facial feature descriptor. Subsequently, a density histogram module was developed for face label generation. During NN-CV construction, the model was assessed on recognizing the OH descriptor and predicting the face label. The performance of the NN-CV model was evaluated by examining the statistical measures of root mean square error and coefficient of efficiency. Third, to evaluate the AdaptGC enhancement approach, image quality metrics were adopted: enhancement by entropy, contrast per pixel, second-derivative-like measure of enhancement, and sharpness, supported by visual inspection. The experimental results were examined using five face databases, namely, Extended Yale-B; Carnegie Mellon University Pose, Illumination, and Expression; Mobio; FERET; and Oulu-CASIA NIR-VIS. The final results show that the AdaptGC filter, compared with state-of-the-art methods, is the best choice in terms of contrast and nonuniform illumination adjustment. In summary, the benefits attained show that AdaptGC delivers a profitable enhancement rate, which provides satisfying features for high-rate face recognition systems.
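The core gamma-correction mapping that AdaptGC adapts per image can be written in a few lines. This is a generic sketch of the standard transform only; the adaptive part, predicting the gain factor per image via the NN-CV model, is not shown.

```python
import numpy as np

def gamma_correct(img, gamma):
    """Standard gamma correction for an 8-bit grayscale image:
    out = 255 * (in / 255) ** gamma. A gamma below 1 brightens
    under-lit regions; a gamma above 1 darkens over-lit ones."""
    norm = img.astype(float) / 255.0
    return np.clip(np.rint(255.0 * norm ** gamma), 0, 255).astype(np.uint8)
```

Because the exponent acts on normalized intensities, dark pixels move more than bright ones, which is what makes the transform useful for nonuniform illumination.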

  15. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  16. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  17. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically, akin to the simultaneous execution and observation of actions in social interactions, adaptation effects were modulated only by visual, not motor, adaptation. Action recognition therefore relies primarily on vision-based mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  18. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing.

    PubMed

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2012-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers' exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Own-Group Face Recognition Bias: The Effects of Location and Reputation

    PubMed Central

    Yan, Linlin; Wang, Zhe; Huang, Jianling; Sun, Yu-Hao P.; Judges, Rebecca A.; Xiao, Naiqi G.; Lee, Kang

    2017-01-01

    In the present study, we examined whether social categorization based on university affiliation can induce an advantage in recognizing faces. Moreover, we investigated how the reputation or location of the university affected face recognition performance using an old/new paradigm. We assigned five different university labels to the faces: participants’ own university and four other universities. Among the four other university labels, we manipulated the academic reputation and geographical location of the universities relative to the participants’ own university. The results showed that an own-group face recognition bias emerged for faces with own-university labels compared with those with other-university labels. Furthermore, we found a robust own-group face recognition bias only when the other university was located in a different city far away from participants’ own university. Interestingly, we failed to find an influence of university reputation on the own-group face recognition bias. These results suggest that categorizing a face as a member of one’s own university is sufficient to enhance recognition accuracy, and that location plays a more important role than reputation in the effect of social categorization on face recognition. The results provide insight into the role of motivational factors underlying university membership in face perception. PMID:29066989

  20. The own-age bias in face memory is unrelated to differences in attention--evidence from event-related potentials.

    PubMed

    Neumann, Markus F; End, Albert; Luttmann, Stefanie; Schweinberger, Stefan R; Wiese, Holger

    2015-03-01

    Participants are more accurate at remembering faces from their own relative to a different age group (the own-age bias, or OAB). A recent socio-cognitive account has suggested that differential allocation of attention to old versus young faces underlies this phenomenon. Critically, empirical evidence for a direct relationship between attention to own- versus other-age faces and the OAB in memory is lacking. To fill this gap, we tested the roles of attention in three different experimental paradigms, and additionally analyzed event-related brain potentials (ERPs). In Experiment 1, we compared the learning of old and young faces during focused versus divided attention, but revealed similar OABs in subsequent memory for both attention conditions. Similarly, manipulating attention during learning did not differentially affect the ERPs elicited by young versus old faces. In Experiment 2, we examined the repetition effects from task-irrelevant old and young faces presented under varying attentional loads on the N250r ERP component as an index of face recognition. Independent of load, the N250r effects were comparable for both age categories. Finally, in Experiment 3 we measured the N2pc as an index of attentional selection of old versus young target faces in a visual search task. The N2pc was not significantly different for the young versus the old target search conditions, suggesting similar orientations of attention to either face age group. Overall, we propose that the OAB in memory is largely unrelated to early attentional processes. Our findings therefore contrast with the predictions from socio-cognitive accounts on own-group biases in recognition memory, and are more easily reconciled with expertise-based models.

  1. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts discriminative local features from each sub-image separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method, with the aim of further improving recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE, and AR) demonstrate that the proposed method is effective and outperforms several other sub-pattern based face recognition methods.
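The sub-image partitioning on which sub-pattern methods like SpMMDA operate can be sketched as a simple helper. This is illustrative only, not the authors' code; the 4 x 4 grid is an assumption, and the per-sub-pattern discriminant analysis that follows is omitted.

```python
import numpy as np

def partition_subpatterns(face, rows=4, cols=4):
    """Split a face image into a grid of non-overlapping sub-images,
    each flattened to a vector; a sub-pattern method then extracts
    local features from each vector separately before combining them."""
    h, w = face.shape
    bh, bw = h // rows, w // cols
    return [face[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].ravel()
            for i in range(rows) for j in range(cols)]
```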

  2. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to invariantly represent complex objects in gray-level images, but demands the realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.

  3. Genetic specificity of face recognition.

    PubMed

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.

  4. Genetic specificity of face recognition

    PubMed Central

    Shakeshaft, Nicholas G.; Plomin, Robert

    2015-01-01

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities. PMID:26417086

  5. Abnormal visual scan paths: a psychophysiological marker of delusions in schizophrenia.

    PubMed

    Phillips, M L; David, A S

    1998-02-09

    The role of the visual scan path as a psychophysiological marker of visual attention has been highlighted previously (Phillips and David, 1994). We investigated information processing in schizophrenic patients with severe delusions and again when the delusions were subsiding using visual scan path measurements. We aimed to demonstrate a specific deficit in processing human faces in deluded subjects by relating this to abnormal viewing strategies. Scan paths were measured in six deluded and five non-deluded schizophrenics (matched for medication and negative symptoms), and nine age-matched normal controls. Deluded subjects had abnormal scan paths in a recognition task, fixating non-feature areas significantly more than controls, but were equally accurate. Re-testing after improvement in delusional conviction revealed fewer group differences. The results suggest state-dependent abnormal information processing in schizophrenics when deluded, with reliance on less-salient visual information for decision-making.

  6. Smartphone based face recognition tool for the blind.

    PubMed

    Kramer, K M; Hedin, D S; Rolkosky, D J

    2010-01-01

    The inability to identify people during group meetings is a disadvantage for blind people in many professional and educational situations. To explore the efficacy of face recognition using smartphones in these settings, we have prototyped and tested a face recognition tool for blind users. The tool utilizes smartphone technology in conjunction with a wireless network to provide audio feedback identifying the people in front of the blind user. Testing indicated that the face recognition technology can tolerate up to a 40-degree angle between the direction a person is looking and the camera's axis, achieving a 96% success rate with no false positives. Future work will further develop the technology for local face recognition on the smartphone in addition to remote server-based face recognition.

  7. Face recognition performance of individuals with Asperger syndrome on the Cambridge Face Memory Test.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2011-12-01

    Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42 IQ-matched non-ASD individuals, and age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within +/- 1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.

  8. Implications of holistic face processing in autism and schizophrenia

    PubMed Central

    Watson, Tamara L.

    2013-01-01

    People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581

  9. Audiovisual speech facilitates voice learning.

    PubMed

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  10. Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.

    PubMed

    Annett, J M; Leslie, J C

    1996-08-01

    Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.

  11. When is the right hemisphere holistic and when is it not? The case of Chinese character recognition.

    PubMed

    Chung, Harry K S; Leung, Jacklyn C Y; Wong, Vienne M Y; Hsiao, Janet H

    2018-05-15

    Holistic processing (HP) has long been considered a characteristic of right hemisphere (RH) processing. Indeed, holistic face processing is typically associated with left visual field (LVF)/RH processing advantages. Nevertheless, expert Chinese character recognition involves reduced HP and increased RH lateralization, presenting a counterexample. Recent modeling research suggests that RH processing may be associated with an increase or decrease in HP, depending on whether spacing or component information was used respectively. Since expert Chinese character recognition involves increasing sensitivity to components while deemphasizing spacing information, RH processing in experts may be associated with weaker HP than novices. Consistent with this hypothesis, in a divided visual field paradigm, novices exhibited HP only in the LVF/RH, whereas experts showed no HP in either visual field. This result suggests that the RH may flexibly switch between part-based and holistic representations, consistent with recent fMRI findings. The RH's advantage in global/low spatial frequency processing is suggested to be relative to the task relevant frequency range. Thus, its use of holistic and part-based representations may depend on how attention is allocated for task relevant information. This study provides the first behavioral evidence showing how type of information used for processing modulates perceptual representations in the RH. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired. (c) 2015 APA, all rights reserved.
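
    The per-block discriminability (A) reported above is typically the nonparametric A' statistic, computable directly from hit and false-alarm rates. A minimal sketch under that assumption (the example rates are invented, not the study's data):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discriminability A' from a hit rate and a false-alarm
    rate (both strictly between 0 and 1): 0.5 is chance, 1.0 is perfect."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form for below-chance performance (false alarms exceed hits)
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# e.g., 80% hits to famous faces, 20% false alarms to unfamiliar fillers
print(round(a_prime(0.8, 0.2), 3))  # → 0.875
```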

  13. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation.
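
    The V2S/S2V settings reduce to scoring a single still-image feature vector against a set of video-frame features. The simplest point-to-set baseline scores a query by its best (or average) frame distance; a sketch with synthetic features (PSCL itself learns a correlation mapping and is not reproduced here):

```python
import numpy as np

def point_to_set_score(query, frame_set, reduce="min"):
    """Naive point-to-set distance: `query` is one still-image feature
    vector, `frame_set` is an (n_frames, d) array of video-frame features.
    (A stand-in baseline; the paper's PSCL instead learns the mapping.)"""
    d = np.linalg.norm(frame_set - query, axis=1)  # distance to each frame
    return d.min() if reduce == "min" else d.mean()

# synthetic gallery: five "videos", each 10 frames of 64-d features
rng = np.random.default_rng(0)
gallery = {pid: rng.normal(size=(10, 64)) for pid in range(5)}
probe_id = 3
probe = gallery[probe_id][0] + 0.01 * rng.normal(size=64)  # a noisy still
best = min(gallery, key=lambda pid: point_to_set_score(probe, gallery[pid]))
print(best)  # → 3
```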

  14. Familiarity and face emotion recognition in patients with schizophrenia.

    PubMed

    Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto

    2014-01-01

    To assess the emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IVTR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), which were photographed by expressing the 6 Ekman's basic emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar face task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was with emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence to their respective relatives (p<.01). The sense of familiarity may be a factor involved in the face emotion recognition and it may be disturbed in schizophrenia. © 2013.

  15. Brain Signals of Face Processing as Revealed by Event-Related Potentials

    PubMed Central

    Olivares, Ela I.; Iglesias, Jaime; Saavedra, Cristina; Trujillo-Barreto, Nelson J.; Valdés-Sosa, Mitchell

    2015-01-01

    We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of their distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance knowledge of the brain mechanisms underlying face processing. PMID:26160999

  16. The Hierarchical Brain Network for Face Recognition

    PubMed Central

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level. PMID:23527282
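
    The sub-network clustering described above starts from pairwise functional connectivity, i.e., correlations between regional time courses. A minimal synthetic illustration of why co-activated regions group together (the region count and signals are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 200  # number of fMRI time points
# Two hypothetical sub-networks: regions 0-2 share one latent signal,
# regions 3-5 share another, each plus independent noise.
latent = rng.normal(size=(2, t))
ts = np.vstack([latent[0] + 0.5 * rng.normal(size=t) for _ in range(3)] +
               [latent[1] + 0.5 * rng.normal(size=t) for _ in range(3)])
fc = np.corrcoef(ts)  # 6x6 functional connectivity matrix
within = fc[:3, :3][np.triu_indices(3, 1)].mean()   # same sub-network
between = fc[:3, 3:].mean()                          # across sub-networks
print(within > between)  # → True
```

    Clustering the rows of `fc` (e.g., hierarchically) would then recover the two groups, which is the spirit of the sub-network analysis.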

  17. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception test and a Face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similar to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Retention of identity versus expression of emotional faces differs in the recruitment of limbic areas.

    PubMed

    Röder, Christian H; Mohr, Harald; Linden, David E J

    2011-02-01

    Faces are multidimensional stimuli that convey information for complex social and emotional functions. Separate neural systems have been implicated in the recognition of facial identity (mainly extrastriate visual cortex) and emotional expression (limbic areas and the superior temporal sulcus). Working-memory (WM) studies with faces have shown different but partly overlapping activation patterns in comparison to spatial WM in parietal and prefrontal areas. However, little is known about the neural representations of the different facial dimensions during WM. In the present study 22 subjects performed a face-identity or face-emotion WM task at different load levels during functional magnetic resonance imaging. We found a fronto-parietal-visual WM-network for both tasks during maintenance, including fusiform gyrus. Limbic areas in the amygdala and parahippocampal gyrus demonstrated a stronger activation for the identity than the emotion condition. One explanation for this finding is that the repetitive presentation of faces with different identities but the same emotional expression during the identity-task is responsible for the stronger increase in BOLD signal in the amygdala. These results raise the question how different emotional expressions are coded in WM. Our findings suggest that emotional expressions are re-coded in an abstract representation that is supported at the neural level by the canonical fronto-parietal WM network. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Gamma Activation in Young People with Autism Spectrum Disorders and Typically-Developing Controls When Viewing Emotions on Faces

    PubMed Central

    Wright, Barry; Alderson-Day, Ben; Prendergast, Garreth; Bennett, Sophie; Jordan, Jo; Whitton, Clare; Gouws, Andre; Jones, Nick; Attur, Ram; Tomlinson, Heather; Green, Gary

    2012-01-01

    Background Behavioural studies have highlighted irregularities in recognition of facial affect in children and young people with autism spectrum disorders (ASDs). Recent findings from studies utilising electroencephalography (EEG) and magnetoencephalography (MEG) have identified abnormal activation and irregular maintenance of gamma (>30 Hz) range oscillations when ASD individuals attempt basic visual and auditory tasks. Methodology/Principal Findings The pilot study reported here is the first study to use spatial filtering techniques in MEG to explore face processing in children with ASD. We set out to examine theoretical suggestions that gamma activation underlying face processing may be different in a group of children and young people with ASD (n = 13) compared to typically developing (TD) age, gender and IQ matched controls. Beamforming and virtual electrode techniques were used to assess spatially localised induced and evoked activity. While lower-band (3–30 Hz) responses to faces were similar between groups, the ASD gamma response in occipital areas was observed to be largely absent when viewing emotions on faces. Virtual electrode analysis indicated the presence of intact evoked responses but abnormal induced activity in ASD participants. Conclusions/Significance These findings lend weight to previous suggestions that specific components of the early visual response to emotional faces are abnormal in ASD. Elucidation of the nature and specificity of these findings is worthy of further research. PMID:22859975

  20. NEONATAL VISUAL INFORMATION PROCESSING IN COCAINE-EXPOSED AND NON-EXPOSED INFANTS

    PubMed Central

    Singer, Lynn T.; Arendt, Robert; Fagan, Joseph; Minnes, Sonia; Salvator, Ann; Bolek, Tina; Becker, Michael

    2014-01-01

    This study investigated early neonatal visual preferences in 267 poly drug exposed neonates (131 cocaine-exposed and 136 non-cocaine exposed) whose drug exposure was documented through interviews and urine and meconium drug screens. Infants were given four visual recognition memory tasks comparing looking time to familiarized stimuli of lattices and rectangular shapes to novel stimuli of a schematic face and curved hourglass and bull's eye forms. Cocaine-exposed infants performed more poorly after consideration of confounding factors, with greater severity of cocaine exposure related to lower novelty scores on both self-report and biologic measures of exposure. Findings support theories that link prenatal cocaine exposure to deficits in information processing entailing attentional and arousal organizational systems. Neonatal visual discrimination and attention tasks should be further explored as potentially sensitive behavioral indicators of teratologic effects. PMID:25717215

  1. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  2. Feature saliency in judging the sex and familiarity of faces.

    PubMed

    Roberts, T; Bruce, V

    1988-01-01

    Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first, recognition, task only masking of the eyes had a significant effect on response times. In the second, sex-judgement, task masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.

  3. Atypical development of configural face recognition in children with autism, Down syndrome and Williams syndrome.

    PubMed

    Dimitriou, D; Leonard, H C; Karmiloff-Smith, A; Johnson, M H; Thomas, M S C

    2015-05-01

    Configural processing in face recognition is a sensitivity to the spacing between facial features. It has been argued both that its presence represents a high level of expertise in face recognition, and also that it is a developmentally vulnerable process. We report a cross-syndrome investigation of the development of configural face recognition in school-aged children with autism, Down syndrome and Williams syndrome compared with a typically developing comparison group. Cross-sectional trajectory analyses were used to compare configural and featural face recognition utilising the 'Jane faces' task. Trajectories were constructed linking featural and configural performance either to chronological age or to different measures of mental age (receptive vocabulary, visuospatial construction), as well as the Benton face recognition task. An emergent inversion effect across age for detecting configural but not featural changes in faces was established as the marker of typical development. Children from clinical groups displayed atypical profiles that differed across all groups. We discuss the implications for the nature of face processing within the respective developmental disorders, and how the cross-sectional syndrome comparison informs the constraints that shape the typical development of face recognition. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  4. Impaired processing of self-face recognition in anorexia nervosa.

    PubMed

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.
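
    The SFRT's self/other morph continuum can be illustrated as a linear intensity blend between two aligned face images; real morphing software also warps facial geometry, so this sketch covers only the blending half of the idea (the array shapes and the 11-step continuum are assumptions, not the study's parameters):

```python
import numpy as np

def morph(self_face, other_face, alpha):
    """Linear pixel blend used to build self/other morph continua:
    alpha = 1.0 yields the participant's own face, alpha = 0.0 the
    gender-matched unfamiliar face (images assumed pre-aligned)."""
    return alpha * np.asarray(self_face) + (1 - alpha) * np.asarray(other_face)

# an 11-step continuum from "other" to "self"
continuum = [morph(np.ones((4, 4)), np.zeros((4, 4)), a)
             for a in np.linspace(0.0, 1.0, 11)]
```

    Presenting such morphs in random order, then locating the blend level at which "this is me" responses cross 50%, is the usual way to quantify self-face recognition failures.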

  5. Uncooled Infrared Imaging Face Recognition Using Kernel-Based Feature Vector Selection

    DTIC Science & Technology

    2006-09-01

    [The indexed abstract for this record is extraction residue: a garbled equation (4.10), the recoverable sentence "A visualization of that trade-off relationship is depicted in Figure 4.2", and fragments of MATLAB code from the source thesis.]

  6. Emotional face recognition in adolescent suicide attempters and adolescents engaging in non-suicidal self-injury.

    PubMed

    Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P

    2016-03-01

    Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Assessment of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found on emotional face recognition between NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not than inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.

  7. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as a gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
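
    The "thermal guided joint filter" is described only at a high level here; one standard realization of guided filtering is a joint (cross) bilateral filter, in which the range weights come from the thermal guide while the visual image is smoothed, so edges visible in the thermal channel survive the smoothing. A toy sketch of that idea, not the authors' implementation (all parameters are illustrative):

```python
import numpy as np

def joint_bilateral(image, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `image` with spatial Gaussian weights multiplied by range
    weights computed from `guide` (e.g., a registered thermal frame),
    so guide edges are preserved in the filtered visual image."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad_i = np.pad(image.astype(float), radius, mode="edge")
    pad_g = np.pad(guide.astype(float), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch_i = pad_i[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(patch_g - guide[y, x])**2 / (2 * sigma_r**2))
            w_tot = spatial * rng_w
            out[y, x] = (w_tot * patch_i).sum() / w_tot.sum()
    return out
```

    Running this on a noisy visual patch with a clean step edge in the guide smooths the flat regions while keeping the edge sharp, which is the "mutual supplementation" the abstract alludes to.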

  8. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    The research on hand gestures has attracted many image processing-related studies, as a gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716

  9. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
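
    The time-surface construction described above can be sketched directly: around each incoming event, the feature is an exponentially decayed map of the most recent event time at every neighboring pixel (the radius and time constant below are illustrative, not the paper's values):

```python
import numpy as np

def time_surface(last_t, x, y, t, radius=2, tau=50e-3):
    """Time-surface at event (x, y, t): exponentially decayed age of the
    most recent event at each pixel in a (2*radius+1)^2 neighborhood.
    `last_t` holds the last event timestamp per pixel (-inf if none)."""
    patch = last_t[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp((patch - t) / tau)  # 1.0 for just-fired pixels, → 0 with age

# maintain `last_t` from a toy event stream of (x, y, t) tuples
last_t = np.full((16, 16), -np.inf)
events = [(5, 5, 0.000), (6, 5, 0.010), (5, 6, 0.020)]
for ex, ey, et in events:
    last_t[ey, ex] = et
ts = time_surface(last_t, 5, 5, 0.020)
print(ts.shape)  # → (5, 5)
```

    Stacking such surfaces and clustering them into prototypes, layer by layer with growing `radius` and `tau`, yields the hierarchy (HOTS) the abstract describes.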

  10. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
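
    The "symmetric filling" step exploits approximate facial bilateral symmetry: depth pixels lost to self-occlusion are copied from their mirrored counterparts. A toy sketch, assuming the face midline is already aligned to the central image column (the full pipeline would estimate the midline from pose, which is not shown here):

```python
import numpy as np

def symmetric_fill(depth, missing=0.0):
    """Fill missing depth values from the horizontally mirrored pixel,
    assuming the face midline sits at the image's central column."""
    filled = depth.copy()
    mirror = depth[:, ::-1]  # reflect across the vertical midline
    fixable = (filled == missing) & (mirror != missing)
    filled[fixable] = mirror[fixable]
    return filled

d = np.array([[1.0, 2.0, 0.0],   # 0.0 marks occluded depth pixels
              [0.0, 5.0, 6.0]])
print(symmetric_fill(d))
```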

  11. Age-related differences in brain electrical activity during extended continuous face recognition in younger children, older children and adults.

    PubMed

    Van Strien, Jan W; Glimmerveen, Johanna C; Franken, Ingmar H A; Martens, Vanessa E G; de Bruin, Eveline A

    2011-09-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with distractor faces. The children were required to make old vs. new decisions. Older children responded faster than younger children, but younger children exhibited a steeper decrease in latencies across the five repetitions. Older children exhibited better accuracy for new faces, but there were no age differences in recognition accuracy for repeated faces. For the N2, N400 and late positive complex (LPC), we analyzed the old/new effects (repetition 1 vs. new presentation) and the extended repetition effects (repetitions 1 through 5). Compared to older children, younger children exhibited larger frontocentral N2 and N400 old/new effects. For extended face repetitions, negativity of the N2 and N400 decreased in a linear fashion in both age groups. For the LPC, an ERP component thought to reflect recollection, no significant old/new or extended repetition effects were found. Employing the same face recognition paradigm in 20 adults (Study 2), we found a significant N400 old/new effect at lateral frontal sites and a significant LPC repetition effect at parietal sites, with LPC amplitudes increasing linearly with the number of repetitions. This study clearly demonstrates differential developmental courses for the N400 and LPC pertaining to recognition memory for faces. It is concluded that face recognition in children is mediated by early and probably more automatic than conscious recognition processes. In adults, the LPC extended repetition effect indicates that adult face recognition memory is related to a conscious and graded recollection process rather than to an automatic recognition process. © 2011 Blackwell Publishing Ltd.

  12. Successful decoding of famous faces in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2015-01-01

    What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional magnetic resonance imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. Special attention was given to the face-selective area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.

  13. The role of skin colour in face recognition.

    PubMed

    Bar-Haim, Yair; Saidel, Talia; Yovel, Galit

    2009-01-01

    People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the cross-section between skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.

  14. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. This observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  15. Contextual modulation of biases in face recognition.

    PubMed

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-09-23

    The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.

  16. Facial emotion perception by intensity in children and adolescents with 22q11.2 deletion syndrome.

    PubMed

    Leleu, Arnaud; Saucourt, Guillaume; Rigard, Caroline; Chesnoy, Gabrielle; Baudouin, Jean-Yves; Rossi, Massimiliano; Edery, Patrick; Franck, Nicolas; Demily, Caroline

    2016-03-01

    Difficulties in the recognition of emotions in expressive faces have been reported in people with 22q11.2 deletion syndrome (22q11.2DS). However, while low-intensity expressive faces are frequent in everyday life, little is known about how the ability of this population to perceive facial emotions depends on the intensity of expression. In a visual matching task, children and adolescents with 22q11.2DS as well as gender- and age-matched healthy participants were asked to categorise the emotion of a target face among six possible expressions. Static pictures of morphs between neutrality and full expressions were used to parametrically manipulate the intensity of the target face. In comparison to healthy controls, the 22q11.2DS group showed higher perception thresholds (i.e. a more intense expression was needed to perceive the emotion) and lower accuracy for the most expressive faces, indicating reduced categorisation abilities. The number of intrusions (i.e. instances in which one emotion was perceived as another) and a more gradual perception performance indicated smooth boundaries between emotional categories. Correlational analyses with neuropsychological and clinical measures suggested that reduced visual skills may be associated with impaired categorisation of facial emotions. Overall, the present study indicates greater difficulties for children and adolescents with 22q11.2DS in perceiving an emotion in low-intensity expressive faces. This difficulty is underpinned by emotional categories that are not sharply organised. It also suggests that these difficulties may be associated with impaired visual cognition, a hallmark of the cognitive deficits observed in the syndrome. These data suggest promising avenues for future experimental and clinical investigation.

  17. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    PubMed

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not; distractibility, however, did not correlate with participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with the proportion of time spent fixating on the eyes and positively correlated with time spent not fixating on any feature. It was also negatively correlated with scan path length, which in turn correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path that leads to face recognition deficits.

  18. The Oxytocin Receptor Gene (OXTR) and Face Recognition.

    PubMed

    Verhallen, Roeland J; Bosten, Jenny M; Goodbourn, Patrick T; Lawrance-Owen, Adam J; Bargary, Gary; Mollon, J D

    2017-01-01

    A recent study linked individual differences in face recognition to rs237887, a single-nucleotide polymorphism (SNP) of the oxytocin receptor gene (OXTR; Skuse et al., 2014). In that study, participants were assessed using the Warrington Recognition Memory Test for Faces, but performance on Warrington's test has been shown not to rely purely on face recognition processes. We administered the widely used Cambridge Face Memory Test, a purer test of face recognition, to 370 participants. Performance was not significantly associated with rs237887, with 16 other SNPs of OXTR that we genotyped, or with a further 75 imputed SNPs. We also administered three other tests of face processing (the Mooney Face Test, the Glasgow Face Matching Test, and the Composite Face Test), but performance was never significantly associated with rs237887 or with any of the other genotyped or imputed SNPs, after corrections for multiple testing. In addition, we found no associations between OXTR and Autism-Spectrum Quotient scores.

  19. Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia.

    PubMed

    Stollhoff, Rainer; Jost, Jürgen; Elze, Tobias; Kennerknecht, Ingo

    2011-01-25

    This study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. First, we report a controlled test of long-term (one-year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory was significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and did not extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Second, we combined test results on long-term and short-term recognition of faces and objects, and found large heterogeneity in the severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with homogeneous phenotypical profiles. Third, we found that among CPs, self-assessment of real-life difficulties, based on a standardized questionnaire, was strongly correlated with experimentally assessed face recognition deficits. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with homogeneous phenotypical profiles. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception.

Top