Science.gov

Sample records for accurate face recognition

  1. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even for low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved on a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
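
    To make the correlation-filter idea concrete, the sketch below shows a minimal phase-only correlation between aligned 64 × 64 grayscale face crops in NumPy, where a strong correlation peak signals a likely match. This is not the FARCO/S-FARCO implementation; the function names, the gallery structure, and the acceptance threshold are illustrative assumptions.

```python
# Minimal sketch of correlation-based face matching on 64x64 images.
# NOT the FARCO/S-FARCO system: just an illustrative phase-only correlation.
import numpy as np

def phase_only_correlation(probe, reference):
    """Return the peak of the phase-only correlation between two
    equally sized grayscale images (higher peak = better match)."""
    cross = np.fft.fft2(probe) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    return np.fft.ifft2(cross).real.max()

def identify(probe, gallery, threshold=0.1):
    """Compare a probe face against a gallery {name: image}; return the
    best-matching name, or None if the peak falls below the threshold."""
    scores = {name: phase_only_correlation(probe, ref) for name, ref in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# toy usage with random stand-ins for aligned 64x64 face crops
rng = np.random.default_rng(0)
gallery = {"alice": rng.random((64, 64)), "bob": rng.random((64, 64))}
print(identify(gallery["alice"] + 0.05 * rng.random((64, 64)), gallery))  # -> 'alice'
```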

  2. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race

    PubMed Central

    Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping

    2012-01-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788

  3. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

    PubMed Central

    Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.

    2016-01-01

    Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing the Cambridge Face Tracker to the Cervical Range of Motion device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low-cost, easy-to-set-up consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in the future. PMID:27730008
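
    As a rough illustration of how head posture can be derived from the relative positions of tracked facial features, the sketch below estimates roll from the tilt of the inter-ocular line and a yaw proxy from the nose tip's offset relative to the eye midpoint. The actual Cambridge Face Tracker fits a full model to many landmarks, so this is only a hypothetical, simplified geometric example.

```python
# Crude geometric sketch: head-pose cues from three 2-D facial landmarks.
# The real tracker uses constrained local neural fields and many landmarks;
# this is only an illustration of "relative landmark positions -> posture".
import numpy as np

def head_pose_from_landmarks(left_eye, right_eye, nose_tip):
    """Return (roll_deg, yaw_proxy) from three 2-D landmarks (x, y)."""
    left_eye, right_eye, nose_tip = map(np.asarray, (left_eye, right_eye, nose_tip))
    dx, dy = right_eye - left_eye
    roll = np.degrees(np.arctan2(dy, dx))                  # tilt of the eye line
    eye_mid = (left_eye + right_eye) / 2.0
    inter_ocular = np.hypot(dx, dy)
    yaw_proxy = (nose_tip[0] - eye_mid[0]) / inter_ocular  # signed, unitless offset
    return roll, yaw_proxy

# toy usage: a slightly tilted face turned a little to one side
print(head_pose_from_landmarks((100, 120), (160, 126), (136, 160)))
```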

  4. Famous face recognition, face matching, and extraversion.

    PubMed

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  5. [Comparative studies of face recognition].

    PubMed

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can be best answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of their conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for their conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  6. Genetic specificity of face recognition.

    PubMed

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.

  7. Holistic processing predicts face recognition.

    PubMed

    Richler, Jennifer J; Cheung, Olivia S; Gauthier, Isabel

    2011-04-01

    The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.

  8. Semantic information can facilitate covert face recognition in congenital prosopagnosia.

    PubMed

    Rivolta, Davide; Schmalzl, Laura; Coltheart, Max; Palermo, Romina

    2010-11-01

    People with congenital prosopagnosia have never developed the ability to accurately recognize faces. This single-case study systematically investigates covert and overt face recognition in "C.," a 69-year-old woman with congenital prosopagnosia. Specifically, we: (a) describe the first assessment of covert face recognition in congenital prosopagnosia using multiple tasks; (b) show that semantic information can contribute to covert recognition; and (c) provide a theoretical explanation for the mechanisms underlying covert face recognition.

  9. Effective indexing for face recognition

    NASA Astrophysics Data System (ADS)

    Sochenkov, I.; Sochenkova, A.; Vokhmintsev, A.; Makovetskii, A.; Melnikov, A.

    2016-09-01

    Face recognition is one of the most important tasks in computer vision and pattern recognition, and it is widely used in security systems. In some situations it is necessary to identify a person among many others. For this case, this work presents a new approach to data indexing that provides fast retrieval in large image collections. Data indexing in this research consists of five steps: first, we detect the area containing the face; second, we align the face; third, we detect the areas containing the eyes and eyebrows, the nose, and the mouth; fourth, we find key points in each area using different descriptors; and finally, we index these descriptors with the help of a quantization procedure. An experimental analysis of this method is performed. This paper shows that the proposed method achieves results at the level of state-of-the-art face recognition methods while also returning results quickly, which is important for systems that provide safety.
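
    The five-step indexing idea can be sketched as a small inverted index over quantized local descriptors: each descriptor extracted from a face region is assigned to its nearest codebook word, and images are retrieved by counting shared words. The code below assumes detection, alignment, and descriptor extraction have already been done elsewhere; all names and parameters are illustrative.

```python
# Minimal sketch of descriptor quantization plus an inverted index for
# fast face retrieval. Detection/alignment/descriptor steps are assumed done.
import numpy as np
from collections import defaultdict

def quantize(descriptors, codebook):
    """Assign each descriptor (row) to the nearest codebook word."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

class InvertedIndex:
    def __init__(self, codebook):
        self.codebook = codebook
        self.posting = defaultdict(set)      # word id -> set of image ids

    def add(self, image_id, descriptors):
        for w in quantize(descriptors, self.codebook):
            self.posting[w].add(image_id)

    def query(self, descriptors, top_k=5):
        votes = defaultdict(int)
        for w in quantize(descriptors, self.codebook):
            for image_id in self.posting[w]:
                votes[image_id] += 1
        return sorted(votes, key=votes.get, reverse=True)[:top_k]

# toy usage with random 32-dimensional local descriptors
rng = np.random.default_rng(1)
index = InvertedIndex(codebook=rng.random((256, 32)))
for i in range(100):
    index.add(i, rng.random((20, 32)))       # 20 descriptors per gallery image
print(index.query(rng.random((20, 32))))     # ids of the top-5 candidate images
```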

  10. Automated Face Recognition System

    DTIC Science & Technology

    1992-12-01

    done at the University of California San Diego will be given (3, 1). Finally, the review will end with a short overview of the Karhunen-Loève and...define a face space. This basis set which is optimally tuned to the training data is derived using the Karhunen-Loève principal component analysis (7

  11. Thermal to Visible Face Recognition

    DTIC Science & Technology

    2012-04-01

    recognition has been an active area of research for the past two decades due to its wide range of applications in law enforcement and verification...an ideal modality for nighttime tasks, but the large disparateness between the thermal IR and visible spectrums results in a wide modality gap that...CONCLUSION AND FUTURE WORK In this study, we investigated the thermal-to-visible face recognition problem, which has a wide modality gap. We showed

  12. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. Many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition that uses local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to the general variations of lighting, expression, occlusion and aging. The proposed approach yields an increase of about 20% in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve robustness to changes in lighting conditions.
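
    The two ingredients named above, a Gabor filter bank and a local pattern encoding, can be approximated compactly in NumPy. The sketch below computes plain local binary pattern histograms over a few oriented Gabor responses; it is not the authors' local quantized patterns implementation, and the kernel parameters are illustrative assumptions.

```python
# Illustrative sketch: Gabor filter bank + local binary-pattern histograms.
# A compact approximation of the two ingredients, not the paper's method.
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor kernel with the given orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def filter_image(img, kernel):
    """FFT-based (circular) convolution of img with kernel."""
    pad = np.zeros_like(img)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    return sum((n >= c).astype(np.uint8) << i for i, n in enumerate(neighbours))

def describe(face):
    """Concatenate LBP histograms of four oriented Gabor responses."""
    hists = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        response = filter_image(face, gabor_kernel(theta=theta))
        h, _ = np.histogram(lbp_codes(response), bins=256, range=(0, 256))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

# toy usage on a random stand-in for an aligned grayscale face crop
print(describe(np.random.default_rng(2).random((64, 64))).shape)  # (1024,)
```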

  13. Face Recognition Incorporating Ancillary Information

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Ki; Toh, Kar-Ann; Lee, Sangyoun

    2007-12-01

    Due to vast variations of extrinsic and intrinsic imaging conditions, face recognition remains a challenging computer vision problem even today. This is particularly true when the passive imaging approach is considered for robust applications. To advance existing face recognition systems, numerous techniques and methods have been proposed to overcome the almost inevitable performance degradation caused by external factors such as pose, expression, occlusion, and illumination. In particular, the recent part-based method has provided noticeable room for verification performance improvement based on localized features, which have good tolerance to variation of external conditions. The part-based method, however, cannot fully realize this performance gain without incorporating global information from the holistic method. In view of the need to fuse the local and global information in an adaptive manner for reliable recognition, in this paper we investigate whether such external factors can be explicitly estimated and used to boost verification performance during fusion of the holistic and part-based methods. Our empirical evaluations show noticeable performance improvement when adopting the proposed method.

  14. Recognition of Faces of Ingroup and Outgroup Children and Adults

    ERIC Educational Resources Information Center

    Corenblum, B.; Meissner, Christian A.

    2006-01-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil…

  15. Unaware person recognition from the body when face identification fails.

    PubMed

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  16. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  17. Configural processing in face recognition in schizophrenia

    PubMed Central

    Schwartz, Barbara L.; Marvel, Cherie L.; Drapalski, Amy; Rosse, Richard B.; Deutsch, Stephen I.

    2006-01-01

    Introduction. There is currently substantial literature to suggest that patients with schizophrenia are impaired on many face-processing tasks. This study investigated the specific effects of configural changes on face recognition in groups of schizophrenia patients. Methods. In Experiment 1, participants identified facial expressions in upright faces and in faces inverted from their upright orientation. Experiments 2 and 3 examined recognition memory for faces and other non-face objects presented in upright and inverted orientations. Experiment 4 explored recognition of facial identity in composite images where the top half of one face was fused to the bottom half of another face to form a new face configuration. Results. In each experiment, the configural change had the same effect on face recognition for the schizophrenia patients as it did for control participants. Recognising inverted faces was more difficult than recognising upright faces, with a disproportionate effect of inversion on faces relative to other objects. Recognition of facial identity in face-halves was interfered with by the formation of a new face configuration. Conclusion. Collectively, these results suggest that people with schizophrenia rely on configural information to recognise photographs of faces. PMID:16528403

  18. The neural speed of familiar face recognition.

    PubMed

    Barragan-Jason, G; Cauchoix, M; Barbeau, E J

    2015-08-01

    Rapidly recognizing familiar people from their faces appears critical for social interactions (e.g., to differentiate friend from foe). However, the actual speed at which the human brain can distinguish familiar from unknown faces still remains debated. In particular, it is not clear whether familiarity can be extracted from rapid face individualization or if it requires additional time consuming processing. We recorded scalp EEG activity in 28 subjects performing a go/no-go, famous/non-famous, unrepeated, face recognition task. Speed constraints were used to encourage subjects to use the earliest familiarity information available. Event related potential (ERP) analyses show that both the N170 and the N250 components were modulated by familiarity. The N170 modulation was related to behaviour: subjects presenting the strongest N170 modulation were also faster but less accurate than those who only showed weak N170 modulation. A complementary Multi-Variate Pattern Analysis (MVPA) confirmed ERP results and provided some more insights into the dynamics of face recognition as the N170 differential effect appeared to be related to a first transitory phase (transitory bump of decoding power) starting at around 140 ms, which returned to baseline afterwards. This bump of activity was henceforth followed by an increase of decoding power starting around 200 ms after stimulus onset. Overall, our results suggest that rather than a simple single process, familiarity for faces may rely on a cascade of neural processes, including a coarse and fast stage starting at 140 ms and a more refined but slower stage occurring after 200 ms.

  19. Neural microgenesis of personally familiar face recognition.

    PubMed

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-09-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.

  20. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  1. Traditional facial tattoos disrupt face recognition processes.

    PubMed

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  2. Voice Recognition in Face-Blind Patients.

    PubMed

    Liu, Ran R; Pancaroglu, Raika; Hills, Charlotte S; Duchaine, Brad; Barton, Jason J S

    2016-04-01

    Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia.

  3. The hierarchical brain network for face recognition.

    PubMed

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  4. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  5. Face Recognition Performance: Role of Demographic Information

    DTIC Science & Technology

    2012-01-01

    BTAS). His other research interests include pattern recognition and computer vision. Mark J. Burge is a scientist with The MITRE Corporation, McLean... Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037–2041, 2006. [23] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition ...wavelets for face recognition," Pattern Analysis & Applications, vol. 9, pp. 273–292, 2006. [25] M. Riesenhuber and T. Poggio, "Hierarchical models of

  6. Optoelectronic-based face recognition versus electronic PCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Alsamman, A.

    2003-11-01

    Face recognition based on principal component analysis (PCA) using eigenfaces is popular in face recognition markets. In this paper we present a comparison between various optoelectronic face recognition techniques and the principal component analysis (PCA)-based technique for face recognition. Computer simulations are used to study the effectiveness of the PCA-based technique, especially for facial images with a high level of distortion. Results are then compared to various distortion-invariant optoelectronic face recognition algorithms such as synthetic discriminant functions (SDF), projection-slice SDF, optical correlator based neural networks, and pose estimation based correlation.
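
    The electronic PCA baseline referred to above is the classic eigenface pipeline: project mean-centred face vectors onto the leading principal components and classify by nearest neighbour in that subspace. The sketch below is a minimal NumPy version of that baseline; dimensions and names are illustrative.

```python
# Minimal eigenface (PCA) sketch: project mean-centred face vectors onto the
# top principal components and classify by nearest neighbour in that subspace.
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, n_pixels) array of flattened training faces."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD of the centred data gives the principal components in vt
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_components]
    return mean, components, centred @ components.T      # training projections

def recognize(probe, mean, components, projections, labels):
    """Return the label of the nearest training face in eigenface space."""
    p = (probe - mean) @ components.T
    distances = np.linalg.norm(projections - p, axis=1)
    return labels[int(distances.argmin())]

# toy usage: 10 random "identities", flattened 32x32 images
rng = np.random.default_rng(3)
train = rng.random((10, 32 * 32))
labels = [f"person_{i}" for i in range(10)]
mean, comps, proj = train_eigenfaces(train, n_components=5)
print(recognize(train[3] + 0.01 * rng.random(32 * 32), mean, comps, proj, labels))  # -> person_3
```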

  7. Face photo-sketch synthesis and recognition.

    PubMed

    Wang, Xiaogang; Tang, Xiaoou

    2009-11-01

    In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html).
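
    A greatly simplified version of the photo-to-sketch step can be written as per-patch nearest-neighbour lookup: each photo patch is matched to the closest training photo patch and replaced by its paired sketch patch. The multiscale MRF in the paper additionally enforces smoothness between neighbouring patches and operates at several scales, both of which are omitted in the sketch below; shapes and patch sizes are illustrative assumptions.

```python
# Greatly simplified patch-based photo-to-sketch synthesis: per-patch
# nearest-neighbour lookup, without the paper's multiscale MRF smoothness.
import numpy as np

def extract_patches(img, size, step):
    patches, coords = [], []
    for y in range(0, img.shape[0] - size + 1, step):
        for x in range(0, img.shape[1] - size + 1, step):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def photo_to_sketch(photo, train_photos, train_sketches, size=8):
    """Synthesize a sketch by per-patch nearest-neighbour lookup in a
    dictionary of paired photo/sketch patches."""
    dict_photo, _ = extract_patches(np.hstack(train_photos), size, size)
    dict_sketch, _ = extract_patches(np.hstack(train_sketches), size, size)
    out = np.zeros_like(photo)
    patches, coords = extract_patches(photo, size, size)
    for patch, (y, x) in zip(patches, coords):
        idx = np.linalg.norm(dict_photo - patch, axis=1).argmin()
        out[y:y + size, x:x + size] = dict_sketch[idx].reshape(size, size)
    return out

# toy usage with random stand-ins for aligned photo/sketch pairs
rng = np.random.default_rng(4)
photos = [rng.random((64, 64)) for _ in range(3)]
sketches = [1.0 - p for p in photos]                 # fake paired "sketches"
print(photo_to_sketch(rng.random((64, 64)), photos, sketches).shape)  # (64, 64)
```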

  8. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  9. Extraversion predicts individual differences in face recognition.

    PubMed

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts.

  10. Face recognition motivated by human approach

    NASA Astrophysics Data System (ADS)

    Kamgar-Parsi, Behrooz; Lawson, Wallace Edgar; Kamgar-Parsi, Behzad

    2010-04-01

    We report the development of a face recognition system which operates in the same way as humans in that it is capable of recognizing a number of people, while rejecting everybody else as strangers. While humans do it routinely, a particularly challenging aspect of the problem of open-world face recognition has been the question of rejecting previously unseen faces as unfamiliar. Our approach can handle previously unseen faces; it is based on identifying and enclosing the region(s) in the human face space which belong to the target person(s).

  11. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The gaining popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, non-expensive LDR displays. These tone mapping methods can produce very different visualizations, raising several concerns about privacy intrusion. In fact, some visualization methods allow perceptual recognition of the individuals, while others reveal no identity at all. Given that perceptual recognition might be possible, a natural question is how computer-based recognition performs on tone-mapped images. In this paper, we present a study in which automatic face recognition using sparse representation is tested on images produced by common tone mapping operators applied to HDR images, and we describe its ability to recognize facial identity. Furthermore, typical LDR images are used for face recognition training.
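
    As context for the kind of preprocessing the study evaluates, the sketch below applies one simple Reinhard-style global tone mapping operator to an HDR luminance map to obtain an 8-bit LDR image that a conventional face recognizer could consume. It is only one of many possible operators and is not necessarily among those used in the paper; the recognizer itself is not shown.

```python
# Illustrative sketch of tone mapping an HDR radiance map to an 8-bit LDR
# image before face recognition. Simple Reinhard-style global operator only.
import numpy as np

def tone_map_global(hdr, key=0.18, eps=1e-6):
    """Map an HDR luminance image (arbitrary positive range) to [0, 255]."""
    log_mean = np.exp(np.mean(np.log(hdr + eps)))        # geometric mean luminance
    scaled = key * hdr / log_mean                         # expose around middle grey
    ldr = scaled / (1.0 + scaled)                         # compress highlights
    return np.clip(255.0 * ldr, 0, 255).astype(np.uint8)

# toy usage: synthetic HDR face region spanning a 10^4 dynamic range
rng = np.random.default_rng(8)
hdr = np.exp(rng.uniform(np.log(1e-2), np.log(1e2), size=(64, 64)))
ldr = tone_map_global(hdr)
print(ldr.dtype, ldr.min(), ldr.max())
```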

  12. Bayesian face recognition and perceptual narrowing in face-space.

    PubMed

    Balas, Benjamin

    2012-07-01

    During the first year of life, infants' face recognition abilities are subject to 'perceptual narrowing', the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in 'face-space', a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face-space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise.

  13. Real-time, face recognition technology

    SciTech Connect

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  14. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerges. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performances at face recognition.

  15. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
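
    A simplified stand-in for the CBFD pipeline can be written in a few steps: extract pixel difference vectors (PDVs), project them to a low-dimensional space, binarize, and pool the binary codes into a histogram. In the sketch below the learned mapping is replaced by plain PCA with sign binarization, which captures the variance-maximization idea but omits the paper's other constraints and the coupled (C-CBFD) variant.

```python
# Simplified CBFD-style pipeline: PDV extraction, PCA projection with sign
# binarization (standing in for the paper's learned mapping), and pooling.
import numpy as np

def pixel_difference_vectors(img):
    """PDV of each interior pixel: differences to its 8 neighbours."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = [img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx] - c
             for dy, dx in offsets]
    return np.stack(diffs, axis=-1).reshape(-1, 8)

def learn_projection(training_pdvs, n_bits=6):
    """Return a mean and PCA projection used to map PDVs to n_bits codes."""
    mean = training_pdvs.mean(axis=0)
    _, _, vt = np.linalg.svd(training_pdvs - mean, full_matrices=False)
    return mean, vt[:n_bits]

def binary_histogram(img, mean, proj):
    """Map each PDV to a binary code and pool the codes into a histogram."""
    codes = ((pixel_difference_vectors(img) - mean) @ proj.T > 0).astype(int)
    words = codes @ (1 << np.arange(codes.shape[1]))
    hist, _ = np.histogram(words, bins=2 ** codes.shape[1],
                           range=(0, 2 ** codes.shape[1]))
    return hist / hist.sum()

# toy usage on random stand-ins for aligned grayscale face crops
rng = np.random.default_rng(5)
train = np.vstack([pixel_difference_vectors(rng.random((64, 64))) for _ in range(5)])
mean, proj = learn_projection(train)
print(binary_histogram(rng.random((64, 64)), mean, proj).shape)  # (64,)
```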

  16. How Fast is Famous Face Recognition?

    PubMed Central

    Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.

    2012-01-01

    The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503

  17. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  18. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity.

  19. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms are needed). A high identification rate (97.44% with Top-1 match) has been achieved with the proposed approach on our preliminary face dataset (of 39 subjects), regardless of operating time and glasses-wearing condition.
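
    The matching step described above reduces to comparing binary codes under a Hamming distance, optionally ignoring masked regions such as eyeglasses. The sketch below illustrates only that step with randomly generated codes; deriving real face pattern words from Gabor responses of thermal images, as the paper does, is not shown.

```python
# Minimal sketch of identification by smallest Hamming distance between
# binary face codes, with an optional mask (e.g. eyeglasses) excluded.
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of differing bits, ignoring positions where mask is False."""
    keep = np.ones_like(code_a, dtype=bool) if mask is None else mask
    return np.count_nonzero(code_a[keep] != code_b[keep]) / max(keep.sum(), 1)

def identify(probe_code, gallery_codes, mask=None):
    """Return (best_id, distance) over a gallery {subject_id: code}."""
    scored = {sid: hamming_distance(probe_code, code, mask)
              for sid, code in gallery_codes.items()}
    best = min(scored, key=scored.get)
    return best, scored[best]

# toy usage with random binary codes and a fake eyeglasses mask
rng = np.random.default_rng(6)
gallery = {f"subject_{i}": rng.integers(0, 2, 4096, dtype=np.uint8) for i in range(39)}
probe = gallery["subject_7"].copy()
probe[:400] ^= 1                            # corrupt the bits "behind the glasses"
mask = np.ones(4096, dtype=bool)
mask[:400] = False                          # exclude those bits from matching
print(identify(probe, gallery, mask))       # -> ('subject_7', 0.0)
```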

  20. Famous faces as icons. The illusion of being an expert in the recognition of famous faces.

    PubMed

    Carbon, Claus-Christian

    2008-01-01

    It is a common belief that we are experts in the processing of famous faces. Although our ability to quickly and accurately recognise pictures of famous faces is quite impressive, we might not really process famous faces as faces per se, but as 'icons' or famous still pictures of famous faces. This assumption was tested in two parallel experiments employing a recognition task on famous, but personally unfamiliar, and on personally familiar faces. Both tests included (a) original, 'iconic' pictures, (b) slightly modified versions of familiar pictures, and (c) rather unfamiliar pictures of familiar persons. Participants (n = 70 + 70) indeed recognised original pictures of famous and personally familiar people very accurately, while performing poorly in recognising slightly modified, as well as unfamiliar versions of famous, but not personally familiar persons. These results indicate that the successful processing of famous faces may depend on icons imbued in society but not on the face as such.

  1. Face Recognition With Neural Networks

    DTIC Science & Technology

    1992-12-01

    condition known as prosopagnosia. Both researchers agree that patients with prosopagnosia, when they have come to autopsy, always have bilateral lesions...parietal region) do not have prosopagnosia. This also supports, albeit in a limited manner, the notion that the process is localized. Accepting...global to local idea is also supported in the prosopagnosia studies. Individuals with prosopagnosia can still identify a face as a face, but they can

  2. Do people have insight into their face recognition abilities?

    PubMed

    Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor

    2017-02-01

    Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated with poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces-the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests. In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to

  3. Face detection and eyeglasses detection for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2012-01-01

    Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, and thus glasses appear as dark areas in a thermal image. One possible solution is to detect eyeglasses and to exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed, where region growing and morphology operations are used to segment the body of a subject; the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses lies within the detected face area. The eyeglasses detection algorithm produces either a binary mask if eyeglasses are present, or an empty set if there are none. In the proposed eyeglasses detection algorithm, block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the shapes and locations of eyeglasses) are employed. The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both faces and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well when compared against the predefined ground truths.
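
    The core of projection profile analysis can be illustrated by thresholding the warm pixels of a thermal image and bounding the face with the row and column sums of the resulting mask. The sketch below omits the region growing, morphology, and derivative analysis used in the paper; the threshold choices are illustrative assumptions.

```python
# Rough sketch of projection-profile face localization in a thermal image:
# threshold warm pixels, then bound them with row/column projection profiles.
import numpy as np

def face_bounding_box(thermal, warm_fraction=0.1):
    """Return (top, bottom, left, right) of the dominant warm region."""
    threshold = np.quantile(thermal, 1.0 - warm_fraction)
    warm = thermal >= threshold
    rows = warm.sum(axis=1)              # horizontal projection profile
    cols = warm.sum(axis=0)              # vertical projection profile
    row_keep = np.flatnonzero(rows > 0.1 * rows.max())
    col_keep = np.flatnonzero(cols > 0.1 * cols.max())
    return row_keep[0], row_keep[-1], col_keep[0], col_keep[-1]

# toy usage: a synthetic "thermal image" with a warm square standing in for a face
frame = np.zeros((120, 160))
frame[30:90, 60:110] = 1.0
print(face_bounding_box(frame))          # -> (30, 89, 60, 109)
```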

  4. Face-space: A unifying concept in face recognition research.

    PubMed

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.

  5. Recognition of own-race and other-race caricatures: implications for models of face recognition.

    PubMed

    Byatt, G; Rhodes, G

    1998-08-01

    Valentine's (Valentine T. Q J Exp Psychol 1991;43A:161-204) face recognition framework supports both a norm-based coding model (NBC) and an exemplar-only, absolute coding model (ABC). According to NBC: (1) faces are represented in terms of deviations from a prototype or norm; (2) caricatures are effective because they exaggerate this norm deviation information; and (3) other-race faces are coded relative to the (only available) own-race norm. Therefore NBC predicts that, for European subjects, caricatures of Chinese faces made by distorting differences from the European norm would be more effective than caricatures made relative to the Chinese norm. According to ABC: (1) faces are encoded as absolute values on a set of shared dimensions with the norm playing no role in recognition; (2) caricatures are effective because they minimise exemplar density; and (3) the dimensions of face-space are inappropriate for other-race faces, leaving them relatively densely clustered. ABC predicts that all faces would be recognised more accurately when caricatured against their own-race norm. We tested European subjects' identification of European and Chinese faces, caricatured against both race norms. The ABC model's prediction was supported. European faces were also rated as more distinctive and recognised more easily than Chinese faces. However, the own-race recognition bias held even when the races were equated for distinctiveness, which suggests that the ABC model may not provide a complete account of race effects in recognition.
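
    The norm-deviation account of caricaturing has a simple arithmetic form: represent a face as a vector of measurements and exaggerate its difference from a chosen norm. The sketch below shows how the same face yields different caricatures depending on whether the own-race or other-race norm is used, which is the contrast the experiment exploits; the measurements and norm values are invented for illustration.

```python
# Worked sketch of norm-deviation caricaturing: exaggerate a face's
# difference from a chosen norm. All measurement values are invented.
import numpy as np

def caricature(face, norm, strength=1.5):
    """Exaggerate the deviation of `face` from `norm` by `strength` (> 1)."""
    return norm + strength * (face - norm)

# toy measurements: [eye spacing, nose length, mouth width, face width]
european_norm = np.array([62.0, 48.0, 50.0, 140.0])
chinese_norm  = np.array([64.0, 45.0, 48.0, 146.0])
face          = np.array([66.0, 44.0, 47.0, 150.0])

# the same face caricatured against two different norms gives two different
# caricatures, which is what separates the NBC and ABC predictions
print(caricature(face, european_norm))   # deviations x1.5: 68, 42, 45.5, 155
print(caricature(face, chinese_norm))    # deviations x1.5: 67, 43.5, 46.5, 152
```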

  6. About-face on face recognition ability and holistic processing

    PubMed Central

    Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027

  7. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (and other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
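
    The two ingredients described above, augmenting the training set with noisy virtual samples and classifying by class-wise residuals of a regularized (collaborative) representation, can be sketched as follows. This is a minimal linear sketch in Python/NumPy under assumed data shapes; the paper's kernel-space extension and its exact objective function are not reproduced, and the noise level and regularization weight are illustrative.

      import numpy as np

      def make_virtual_samples(X, noise_std=0.05, rng=None):
          """Create virtual training samples by adding Gaussian noise to each image.
          X: (n_samples, n_features) matrix of vectorized face images in [0, 1]."""
          rng = np.random.default_rng(rng)
          return np.clip(X + rng.normal(0.0, noise_std, X.shape), 0.0, 1.0)

      def crc_classify(X_train, y_train, x_test, lam=1e-3):
          """Collaborative representation by regularized least squares, followed by
          classification on the class-wise reconstruction residual."""
          A = X_train.T                                  # (n_features, n_samples)
          G = A.T @ A + lam * np.eye(A.shape[1])
          rho = np.linalg.solve(G, A.T @ x_test)         # representation coefficients
          best, best_label = np.inf, None
          for label in np.unique(y_train):
              idx = (y_train == label)
              residual = np.linalg.norm(x_test - A[:, idx] @ rho[idx])
              if residual < best:
                  best, best_label = residual, label
          return best_label

      # X_aug = np.vstack([X_train, make_virtual_samples(X_train)])
      # y_aug = np.concatenate([y_train, y_train])
      # prediction = crc_classify(X_aug, y_aug, x_test)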

  8. Self-face recognition in social context.

    PubMed

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain.

  9. Compositional Dictionaries for Domain Adaptive Face Recognition.

    PubMed

    Qiang Qiu; Chellappa, Rama

    2015-12-01

    We present a dictionary learning approach to compensate for the transformation of faces due to the changes in view point, illumination, resolution, and so on. The key idea of our approach is to force domain-invariant sparse coding, i.e., designing a consistent sparse representation of the same face in different domains. In this way, the classifiers trained on the sparse codes in the source domain consisting of frontal faces can be applied to the target domain (consisting of faces in different poses, illumination conditions, and so on) without much loss in recognition accuracy. The approach is to first learn a domain base dictionary, and then describe each domain shift (identity, pose, and illumination) using a sparse representation over the base dictionary. The dictionary adapted to each domain is expressed as the sparse linear combinations of the base dictionary. In the context of face recognition, with the proposed compositional dictionary approach, a face image can be decomposed into sparse representations for a given subject, pose, and illumination. This approach has three advantages. First, the extracted sparse representation for a subject is consistent across domains, and enables pose and illumination insensitive face recognition. Second, sparse representations for pose and illumination can be subsequently used to estimate the pose and illumination condition of a face image. Last, by composing sparse representations for the subject and the different domains, we can also perform pose alignment and illumination normalization. Extensive experiments using two public face data sets are presented to demonstrate the effectiveness of the proposed approach for face recognition.

  10. FaceID: A face detection and recognition system

    SciTech Connect

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described. Also described is a system which matches a given face image with faces in a database. Face detection in an image is performed by template matching using templates derived from a selected set of normalized faces. Instead of using original gray-level images, vertical gradient images were calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. The training is performed by using an adaptation of the backpropagation algorithm. This system has been developed and tested using images from the FERET database and a set of images obtained from Rowley et al. and from Sung and Poggio.
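
    The detection front end rests on two simple operations: a vertical gradient transform and template matching at a single scale. The sketch below illustrates only those two steps with NumPy/SciPy; the multi-scale search, the coarse-to-fine strategy, the component templates, and the neural-network matcher are not reproduced, and the array names are placeholders.

      import numpy as np
      from scipy.ndimage import sobel

      def vertical_gradient(image):
          """Vertical gradient image (Sobel along rows), which reduces sensitivity
          to overall lighting level and skin color compared with raw grey levels."""
          return sobel(image.astype(float), axis=0)

      def ncc_map(image, template):
          """Normalized cross-correlation of a template over an image (one scale)."""
          windows = np.lib.stride_tricks.sliding_window_view(image, template.shape)
          t = template - template.mean()
          w = windows - windows.mean(axis=(-2, -1), keepdims=True)
          num = (w * t).sum(axis=(-2, -1))
          den = np.sqrt((w ** 2).sum(axis=(-2, -1)) * (t ** 2).sum()) + 1e-12
          return num / den

      # grad = vertical_gradient(frame)          # frame: 2-D grey-level array
      # scores = ncc_map(grad, face_template)    # face_template: gradient template
      # i, j = np.unravel_index(scores.argmax(), scores.shape)  # best candidate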

  11. Nonlinear fusion for face recognition using fuzzy integral

    NASA Astrophysics Data System (ADS)

    Chen, Xuerong; Jing, Zhongliang; Xiao, Gang

    2007-08-01

    Face recognition based only on the visible spectrum is not accurate or robust enough to be used in uncontrolled environments. Recently, infrared (IR) imagery of the human face has been considered a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has its own limitations. In order to fuse information from the two modalities to achieve better results, we propose a new fusion recognition scheme based on nonlinear decision fusion, using the fuzzy integral to fuse the objective evidence supplied by each modality. The scheme also employs independent component analysis (ICA) for feature extraction and support vector machines (SVMs) to provide the classification evidence. Recognition rate is used to evaluate the proposed scheme. Experimental results show the scheme improves recognition performance substantially.
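
    As a concrete illustration of decision-level fusion with a fuzzy integral, the sketch below combines two per-class support values (for example, calibrated SVM outputs from the visible and IR channels) with a Sugeno fuzzy integral over a two-source lambda-fuzzy measure. The choice of the Sugeno integral and the density values are assumptions for illustration only; the paper's exact fuzzy measure and integral are not specified in the abstract.

      import numpy as np

      def lambda_measure_two(g1, g2):
          """Sugeno lambda for a two-source fuzzy measure: solves
          1 + lam = (1 + lam*g1) * (1 + lam*g2)."""
          if abs(g1 + g2 - 1.0) < 1e-12:
              return 0.0
          return (1.0 - g1 - g2) / (g1 * g2)

      def sugeno_fuse(h, g):
          """Sugeno fuzzy integral of two per-source class supports h with fuzzy
          densities g; returns the fused support in [0, 1]."""
          h, g = np.asarray(h, float), np.asarray(g, float)
          lam = lambda_measure_two(*g)
          order = np.argsort(-h)                  # sort sources by decreasing support
          h, g = h[order], g[order]
          g_running = g[0]                        # g(A_1)
          fused = min(h[0], g_running)
          g_running = g[0] + g[1] + lam * g[0] * g[1]   # g(A_2) = g(X) = 1
          return max(fused, min(h[1], g_running))

      # supports_vis, supports_ir: (n_classes,) support arrays; densities assumed
      # fused = [sugeno_fuse([v, r], (0.6, 0.5)) for v, r in zip(supports_vis, supports_ir)]
      # predicted_class = int(np.argmax(fused))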

  12. Infrared and visible image fusion for face recognition

    NASA Astrophysics Data System (ADS)

    Singh, Saurabh; Gyaourova, Aglika; Bebis, George; Pavlidis, Ioannis

    2004-08-01

    Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has other limitations, including the fact that glass is opaque to IR. As a result, IR imagery is very sensitive to facial occlusion caused by eyeglasses. In this paper, we propose fusing IR with visible images, exploiting the relatively lower sensitivity of visible imagery to occlusions caused by eyeglasses. Two different fusion schemes have been investigated in this study: (1) image-based fusion performed in the wavelet domain and (2) feature-based fusion performed in the eigenspace domain. In both cases, we employ Genetic Algorithms (GAs) to find an optimum strategy to perform the fusion. To evaluate and compare the proposed fusion schemes, we have performed extensive recognition experiments using the Equinox face dataset and the popular method of eigenfaces. Our results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.
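
    A minimal version of the first (image-based) scheme, pixel-level fusion in the wavelet domain, is sketched below using PyWavelets with a fixed rule: average the approximation band and keep the larger-magnitude detail coefficients. The paper instead searches for the fusion strategy with a genetic algorithm, so the rule, wavelet, and decomposition depth here are illustrative assumptions.

      import numpy as np
      import pywt

      def fuse_wavelet(visible, infrared, wavelet="db2", level=3):
          """Fuse co-registered visible and IR face images in the wavelet domain:
          average the approximation band, keep the detail coefficient with the
          larger magnitude at each position."""
          cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
          ci = pywt.wavedec2(infrared.astype(float), wavelet, level=level)
          fused = [0.5 * (cv[0] + ci[0])]                  # approximation band
          for dv, di in zip(cv[1:], ci[1:]):               # detail bands per level
              fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                 for a, b in zip(dv, di)))
          return pywt.waverec2(fused, wavelet)

      # fused_face = fuse_wavelet(vis_img, ir_img)   # both: aligned 2-D arrays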

  13. Influence of motion on face recognition.

    PubMed

    Bonfiglio, Natale S; Manfredi, Valentina; Pessa, Eliano

    2012-02-01

    The influence of motion information and temporal associations on recognition of non-familiar faces was investigated using two groups that performed a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of motion of the face rotating in depth; the other group was presented with random sequences of the same views. In one condition, participants viewed the sequences of views in rapid succession with a negligible interstimulus interval (ISI). This condition was characterized by three different presentation times. In another condition, participants were presented with a sequence with a 1-sec ISI between the views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, related to a stronger impression of face rotation. Analysis of data from 45 participants showed a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performances associated with regular and random sequences were not significant.

  14. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.

  15. Neurocomputational bases of object and face recognition.

    PubMed Central

    Biederman, I; Kalocsai, P

    1997-01-01

    A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a 'jet') is centred on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel & Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces. PMID:9304687

  16. Face recognition with L1-norm subspaces

    NASA Astrophysics Data System (ADS)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition then becomes the problem of associating a new, unknown face image with the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
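
    Once a per-class projection subspace has been computed (by whatever L1-norm procedure; that computation is not shown here), the recognition step reduces to a nearest-subspace assignment. The sketch below assumes each class is represented by a matrix with orthonormal columns and uses the reconstruction residual as the closeness measure; names and shapes are illustrative.

      import numpy as np

      def nearest_subspace(x, bases):
          """Assign a vectorized face x to the class whose subspace reconstructs it
          best. `bases` maps class label -> (d, k_c) matrix with orthonormal columns
          (computed beforehand, e.g. by an L1-norm PCA routine not shown here).
          k_c may differ per class when the component budget is allocated adaptively."""
          best, best_label = np.inf, None
          for label, Q in bases.items():
              residual = np.linalg.norm(x - Q @ (Q.T @ x))   # distance to span(Q)
              if residual < best:
                  best, best_label = residual, label
          return best_label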

  17. Finding Faces Among Faces: Human Faces are Located More Quickly and Accurately than Other Primate and Mammal Faces

    PubMed Central

    Simpson, Elizabeth A.; Buchin, Zachary; Werner, Katie; Worrell, Rey; Jakobsen, Krisztina V.

    2014-01-01

    We tested the specificity of human face search efficiency by examining whether there is a broad window of detection for various face-like stimuli—human and animal faces—or whether own-species faces receive greater attentional allocation. We assessed the strength of the own-species face detection bias by testing whether human faces are located more efficiently than other animal faces, when presented among various other species’ faces, in heterogeneous 16-, 36-, and 64-item arrays. Across all array sizes, we found that, controlling for distractor type, human faces were located faster and more accurately than primate and mammal faces, and that, controlling for target type, searches were faster when distractors were human faces compared to animal faces, revealing more efficient processing of human faces regardless of their role as targets or distractors (Experiment 1). Critically, these effects remained when searches were for specific species’ faces (human, chimpanzee, otter), ruling out a category-level explanation (Experiment 2). Together, these results suggest that human faces may be processed more efficiently than animal faces, both when task-relevant (targets), and when task-irrelevant (distractors), even when in direct competition with other faces. These results suggest that there is not a broad window of detection for all face-like patterns, but that human adults process own-species’ faces more efficiently than other species’ faces. Such own-species search efficiencies may arise through experience with own-species faces throughout development, or may be privileged early in development, due to the evolutionary importance of conspecifics’ faces. PMID:25113852

  18. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    PubMed

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement.

  19. Double linear regression classification for face recognition

    NASA Astrophysics Data System (ADS)

    Feng, Qingxiang; Zhu, Qi; Tang, Lin-Lin; Pan, Jeng-Shyang

    2015-02-01

    A new classifier, designed on the basis of the linear regression classification (LRC) classifier and the simple-fast representation-based classifier (SFR) and named the double linear regression classification (DLRC) classifier, is proposed for image recognition in this paper. The traditional LRC classifier uses only the distance between test image vectors and the predicted image vectors of each class subspace for classification, while the SFR classifier uses the test image vectors and the nearest image vectors of the class subspace to classify the test sample. The DLRC classifier, in contrast, computes the predicted image vectors of each class subspace and uses all the predicted vectors to construct a novel robust global space. Then, the DLRC utilizes this global space to obtain new predicted vectors of each class for classification. A large number of experiments on the AR, JAFFE, Yale, Extended YaleB, and PIE face databases are used to evaluate the performance of the proposed classifier. The experimental results show that the proposed classifier achieves a better recognition rate than the LRC classifier, the SFR classifier, and several other classifiers.
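
    For context, the baseline LRC step referred to above can be sketched in a few lines: each class predicts the test vector by least squares over its own training images, and the class with the closest predicted vector wins. This is a sketch of plain LRC only; DLRC's construction of a global space from the predicted vectors is not reproduced, and the data layout is an assumption.

      import numpy as np

      def lrc_classify(class_matrices, y):
          """Linear regression classification (LRC). `class_matrices` maps label ->
          (d, n_c) matrix whose columns are that class's training images; y is the
          (d,) test image vector."""
          best, best_label, predicted = np.inf, None, {}
          for label, Xc in class_matrices.items():
              beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
              y_hat = Xc @ beta                      # predicted image vector
              predicted[label] = y_hat
              dist = np.linalg.norm(y - y_hat)
              if dist < best:
                  best, best_label = dist, label
          return best_label, predicted               # DLRC would further combine the
                                                     # predicted vectors (not shown)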

  20. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451
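
    The multi-resolution spectral feature images described above can be approximated, for illustration, by a bank of Gabor filters at several scales and orientations, keeping the response magnitude of each filter as one feature image. The specific filters and frequencies, the 2DLDA stage, and the classifier committee learning strategy used in the paper are not reproduced; this sketch only shows how such a feature-image stack might be built with scikit-image.

      import numpy as np
      from skimage.filters import gabor

      def spectral_feature_images(face, frequencies=(0.1, 0.2, 0.3), orientations=4):
          """Filter one face image at several scales (frequencies) and orientations
          and keep the filter-response magnitude as a stack of feature images."""
          features = []
          for f in frequencies:
              for k in range(orientations):
                  theta = k * np.pi / orientations
                  real, imag = gabor(face.astype(float), frequency=f, theta=theta)
                  features.append(np.hypot(real, imag))   # magnitude image
          return np.stack(features)                       # (scales*orients, H, W)

      # Each feature image can then feed its own 2DLDA-based classifier, and the
      # individual decisions can be combined by committee voting (not shown).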

  1. Learning invariant face recognition from examples.

    PubMed

    Müller, Marco K; Tremer, Michael; Bodenstein, Christian; Würtz, Rolf P

    2013-05-01

    Autonomous learning is demonstrated by living beings that learn visual invariances during their visual experience. Standard neural network models do not show this sort of learning. On the example of face recognition in different situations we propose a learning process that separates learning of the invariance proper from learning new instances of individuals. The invariance is learned by a set of examples called model, which contains instances of all situations. New instances are compared with these on the basis of rank lists, which allow generalization across situations. The result is also implemented as a spike-time-based neural network, which is shown to be robust against disturbances. The learning capability is demonstrated by recognition experiments on a set of standard face databases.

  2. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with faces, which were blended faces of same or different genders. The results of the 2 experiments showed…

  3. Super-resolution benefit for face recognition

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Maschal, Robert; Young, S. Susan; Hong, Tsai Hong; Phillips, Jonathon P.

    2011-06-01

    Vast amounts of video footage are being continuously acquired by surveillance systems on private premises, commercial properties, government compounds, and military installations. Facial recognition systems have the potential to identify suspicious individuals on law enforcement watchlists, but accuracy is severely hampered by the low resolution of typical surveillance footage and the far distance of suspects from the cameras. To improve accuracy, super-resolution can enhance suspect details by utilizing a sequence of low resolution frames from the surveillance footage to reconstruct a higher resolution image for input into the facial recognition system. This work measures the improvement of face recognition with super-resolution in a realistic surveillance scenario. Low resolution and super-resolved query sets are generated using a video database at different eye-to-eye distances corresponding to different distances of subjects from the camera. Performance of a face recognition algorithm using the super-resolved and baseline query sets was calculated by matching against galleries consisting of frontal mug shots. The results show that super-resolution improves performance significantly at the examined mid and close ranges.

  4. Face recognition: a model specific ability.

    PubMed

    Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.

  5. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition.

  6. Face recognition: a model specific ability

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura T.; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition’s variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds. PMID:25346673

  7. Impaired face recognition is associated with social inhibition.

    PubMed

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety.

  8. Impaired face recognition is associated with social inhibition

    PubMed Central

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-01-01

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety. PMID:26776300

  9. Towards Robust Face Recognition from Video

    SciTech Connect

    Price, JR

    2001-10-18

    A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors (expression, illumination, and decoration) are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor (pose) is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.

  10. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only has the advantages of RBC but also is computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method can always classify more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and the training samples. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. The analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
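
    The combination itself amounts to score-level fusion of two per-class dissimilarity vectors. A minimal sketch, assuming both score vectors are already computed and combined by a min-max-normalized weighted sum (the normalization and weight are illustrative choices, not taken from the paper):

      import numpy as np

      def fuse_scores(direct_dist, crc_residual, weight=0.5):
          """Combine per-class distances from direct image matching with per-class
          residuals from collaborative representation; the class with the smallest
          fused score is chosen."""
          def norm(s):
              s = np.asarray(s, float)
              return (s - s.min()) / (s.max() - s.min() + 1e-12)
          fused = weight * norm(direct_dist) + (1.0 - weight) * norm(crc_residual)
          return int(np.argmin(fused)), fused

      # direct_dist[c]  = min over class-c training images of ||test - train||
      # crc_residual[c] = ||test - A_c @ rho_c|| from collaborative representation
      # label, fused = fuse_scores(direct_dist, crc_residual)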

  11. Prevalence of face recognition deficits in middle childhood.

    PubMed

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury, a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties, that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  12. [Face recognition in patients with autism spectrum disorders].

    PubMed

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research conducted on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategies of ASD patients are comparable to those of the control group, depending on the experimental situation or developmental stage, suggesting that face recognition of ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients, and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces, and in the atypical development of face recognition, eliciting symptoms of unstable behavioral characteristics in these patients. Additionally, face recognition in ASD patients is examined from different perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the spectrum clinical condition of ASD.

  13. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  14. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and face recognition, mainly by examining the related theory and the key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the recognition results obtained with different preprocessing methods. In this paper, we choose the YCbCr color space for skin segmentation and choose integral projection for face location. We use the erosion and dilation of the opening and closing operations, together with an illumination compensation method, to preprocess the face images, and then apply the face recognition method based on kernel principal component analysis for analysis and research; the experiments were carried out using a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based extension of the PCA algorithm makes the extracted features represent the original image information better, because it performs nonlinear feature extraction, and can thereby obtain a higher recognition rate. In the image preprocessing stage, we found that different operations may yield different results, and thus lead to different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
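
    A compact way to assemble such a pipeline is sketched below: a rough YCbCr skin mask for the detection stage and a polynomial-kernel KPCA feature extractor feeding a nearest-neighbour classifier, using scikit-learn and scikit-image. The threshold values, kernel degree, and number of components are illustrative assumptions rather than the paper's settings, and the face-location, morphology, and illumination-compensation steps are only indicated by comments.

      import numpy as np
      from skimage.color import rgb2ycbcr
      from sklearn.decomposition import KernelPCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      def skin_mask(rgb):
          """Rough skin segmentation in YCbCr space using commonly quoted Cb/Cr
          thresholds; the exact ranges are an assumption, not the paper's values.
          rgb: float RGB image with values in [0, 1]."""
          ycbcr = rgb2ycbcr(rgb)
          cb, cr = ycbcr[..., 1], ycbcr[..., 2]
          return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

      # After face location (integral projection), morphological cleaning, and
      # illumination compensation, vectorized grey crops X_train/X_test feed a
      # polynomial-kernel KPCA + nearest-neighbour stage:
      model = make_pipeline(
          KernelPCA(n_components=60, kernel="poly", degree=3),
          KNeighborsClassifier(n_neighbors=1),
      )
      # model.fit(X_train, y_train)
      # accuracy = model.score(X_test, y_test)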

  15. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    NASA Astrophysics Data System (ADS)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves in the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. The present research was aimed at evaluating whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing a stricter control of exposure frequency with both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than for familiar voices, even though the level of overall recognition was similar for both of these stimulus domains. The same pattern was observed regarding semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  16. Face age and sex modulate the other-race effect in face recognition.

    PubMed

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance.

  17. Comparison of computer-based and optical face recognition paradigms

    NASA Astrophysics Data System (ADS)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used for simulating the models. PCA is a technique used for identifying patterns in data and representing the data in order to highlight any similarities or differences. The identification of patterns in data of high dimensions (more than three dimensions) is too difficult because the graphical representation of data is impossible. Therefore, PCA is a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data. It uses information theory for improving PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection due to the fact that it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers
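
    The PCA (eigenfaces) building block underlying the digital model can be sketched as follows; the IPCA refinement and the optical JTC model are not reproduced, and array shapes are assumed.

      import numpy as np

      def fit_eigenfaces(X, n_components=50):
          """Standard PCA 'eigenfaces': X is (n_samples, n_pixels). Returns the mean
          face and the top principal directions from the SVD of the centred data."""
          mean = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, Vt[:n_components]          # (n_components, n_pixels)

      def project(X, mean, components):
          return (X - mean) @ components.T        # low-dimensional codes

      def nearest_face(code, train_codes, train_labels):
          return train_labels[np.argmin(np.linalg.norm(train_codes - code, axis=1))]

      # mean, comps = fit_eigenfaces(X_train)
      # codes = project(X_train, mean, comps)
      # pred = nearest_face(project(x_test[None], mean, comps)[0], codes, y_train)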

  18. The Reverse-Caricature Effect Revisited: Familiarization With Frontal Facial Caricatures Improves Veridical Face Recognition.

    PubMed

    Rodríguez, Jobany; Bortfeld, Heather; Rudomín, Isaac; Hernández, Benjamín; Gutiérrez-Osuna, Ricardo

    2009-07-01

    Prior research suggests that recognition of a person's face can be facilitated by exaggerating the distinctive features of the face during training. We tested if this 'reverse-caricature effect' would be robust to procedural variations that created more difficult learning environments. Specifically, we examined whether the effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These findings support the development of new training methods to improve face recognition.

  19. The Reverse-Caricature Effect Revisited: Familiarization With Frontal Facial Caricatures Improves Veridical Face Recognition

    PubMed Central

    RODRÍGUEZ, JOBANY; BORTFELD, HEATHER; RUDOMÍN, ISAAC; HERNÁNDEZ, BENJAMÍN; GUTIÉRREZ-OSUNA, RICARDO

    2010-01-01

    Prior research suggests that recognition of a person's face can be facilitated by exaggerating the distinctive features of the face during training. We tested if this ‘reverse-caricature effect’ would be robust to procedural variations that created more difficult learning environments. Specifically, we examined whether the effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These findings support the development of new training methods to improve face recognition. PMID:21132058

  20. Covert face recognition relies on affective valence in congenital prosopagnosia.

    PubMed

    Bate, Sarah; Haslam, Catherine; Jansari, Ashok; Hodgson, Timothy L

    2009-06-01

    Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied-nice compared to studied-aggressive faces, and performance for studied-neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity.

  1. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    ERIC Educational Resources Information Center

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  2. Graph optimized Laplacian eigenmaps for face recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.; Ruichek, Y.

    2015-01-01

    In recent years, a variety of nonlinear dimensionality reduction (NLDR) techniques have been proposed in the literature. They aim to address the limitations of traditional techniques such as PCA and classical scaling. Most of these techniques assume that the data of interest lie on an embedded non-linear manifold within the higher-dimensional space. They provide a mapping from the high-dimensional space to the low-dimensional embedding and may be viewed, in the context of machine learning, as a preliminary feature extraction step, after which pattern recognition algorithms are applied. Laplacian Eigenmaps (LE) is a nonlinear graph-based dimensionality reduction method. It has been successfully applied to many practical problems such as face recognition. However, the construction of the LE graph suffers, similarly to other graph-based DR techniques, from the following issues: (1) the neighborhood graph is artificially defined in advance, and thus does not necessarily benefit the desired DR task; (2) the graph is built using the nearest-neighbor criterion, which tends to work poorly due to the high dimensionality of the original space; and (3) its computation depends on two parameters whose values are generally difficult to assign, the neighborhood size and the heat kernel parameter. To address the above-mentioned problems, for the particular case of the LPP method (a linear version of LE), L. Zhang et al. [1] have developed a novel DR algorithm whose idea is to integrate graph construction with the specific DR process into a unified framework. This algorithm results in an optimized graph rather than a predefined one.
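
    For reference, a plain Laplacian Eigenmaps embedding, with the predefined k-NN graph and heat-kernel weights that the paper criticizes, can be sketched as follows using NumPy, SciPy, and scikit-learn. The neighborhood size and kernel parameter are exactly the quantities described above as hard to set; the values used here are arbitrary, and the paper's joint graph-optimization scheme is not reproduced.

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.neighbors import kneighbors_graph

      def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, t=1.0):
          """k-NN graph with heat-kernel weights, then the smallest non-trivial
          generalized eigenvectors of L v = lam D v."""
          dist = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
          W = np.where(dist > 0, np.exp(-dist ** 2 / t), 0.0)
          W = np.maximum(W, W.T)                       # symmetrize the graph
          D = np.diag(W.sum(axis=1))
          L = D - W
          vals, vecs = eigh(L, D)                      # generalized eigenproblem
          return vecs[:, 1:n_components + 1]           # skip the constant eigenvector

      # embedding = laplacian_eigenmaps(face_vectors, n_components=30)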

  3. Appearance-based color face recognition with 3D model

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2013-03-01

    Appearance-based face recognition approaches exploit the color cues of face images, i.e., grey-level or color information, for the recognition task. They first encode color face images and then extract facial features for classification. Similar to conventional matrices, hypercomplex matrices also admit a singular value decomposition over the hypercomplex field. In this paper, a novel color face recognition approach based on hypercomplex singular value decomposition is proposed. The approach employs hypercomplex numbers to encode the color information of different channels simultaneously. Hypercomplex singular value decomposition is then utilized to compute the basis vectors of the color face subspace. To improve the learning efficiency of the algorithm, a 3D active deformable model is exploited to generate virtual face images. Color face samples are projected onto the subspace, and the projection coefficients are utilized as facial features. Experimental results on the CMU PIE face database verify the effectiveness of the proposed approach.

  4. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    ERIC Educational Resources Information Center

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  5. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    ERIC Educational Resources Information Center

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  6. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  7. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near infrared response spectrum.
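
    The classification stage, partial least squares regression on one-hot class indicators, can be sketched as follows with scikit-learn. The spatiospectral covariance band fusion is assumed to have already produced one feature vector per face and is not reproduced; the number of PLS components is illustrative.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def pls_face_classifier(X_train, y_train, X_test, n_components=20):
          """Regress one-hot class indicators on the (fused) face features and take
          the arg-max of the predicted response for each test face."""
          classes = np.unique(y_train)
          Y = (y_train[:, None] == classes[None, :]).astype(float)   # one-hot targets
          pls = PLSRegression(n_components=n_components)
          pls.fit(X_train, Y)
          scores = pls.predict(X_test)               # (n_test, n_classes) responses
          return classes[np.argmax(scores, axis=1)]

      # y_pred = pls_face_classifier(fused_train, labels_train, fused_test)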

  8. Transfer between Pose and Illumination Training in Face Recognition

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  9. Recognition of Moving and Static Faces by Young Infants

    ERIC Educational Resources Information Center

    Otsuka, Yumiko; Konishi, Yukuo; Kanazawa, So; Yamaguchi, Masami K.; Abdi, Herve; O'Toole, Alice J.

    2009-01-01

    This study compared 3- to 4-month-olds' recognition of previously unfamiliar faces learned in a moving or a static condition. Infants in the moving condition showed successful recognition with only 30 s familiarization, even when different images of a face were used in the familiarization and test phase (Experiment 1). In contrast, infants in the…

  10. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between the voice and the accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and they are discussed in the context of a person-recognition framework.

  11. Electrophysiological markers of covert face recognition in developmental prosopagnosia.

    PubMed

    Eimer, Martin; Gosling, Angela; Duchaine, Bradley

    2012-02-01

    To study the existence and neural basis of covert face recognition in individuals with developmental prosopagnosia, we tested a group of 12 participants with developmental prosopagnosia in a task that required them to judge the familiarity of successively presented famous or non-famous faces. Electroencephalography was recorded during task performance, and event-related brain potentials were computed for recognized famous faces, non-recognized famous faces and non-famous faces. In six individuals with developmental prosopagnosia, non-recognized famous faces triggered an occipito-temporal N250 component, which is thought to reflect the activation of stored visual memory traces of known individual faces. In contrast to the N250, the P600f component, which is linked to late semantic stages of face identity processing, was not triggered by non-recognized famous faces. Event-related potential correlates of explicit face recognition, obtained on those few trials where participants with developmental prosopagnosia classified famous faces as known or familiar, were similar to the effects previously found in participants with intact face recognition abilities, suggesting that face recognition mechanisms in individuals with developmental prosopagnosia are not qualitatively different from those of unimpaired individuals. Overall, these event-related potential results provide the first neurophysiological evidence for covert face recognition in developmental prosopagnosia, and suggest this phenomenon results from disconnected links between intact identity-specific visual memory traces and later semantic face processing stages. They also imply that the activation of stored visual representations of familiar faces is not sufficient for conscious explicit face recognition.

  12. The role of skin colour in face recognition.

    PubMed

    Bar-Haim, Yair; Saidel, Talia; Yovel, Galit

    2009-01-01

    People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories crossing skin colour with facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.

  13. The asymmetric distribution of informative face information during gender recognition.

    PubMed

    Hu, Fengpei; Hu, Huan; Xu, Lian; Qin, Jungang

    2013-02-01

    Recognition of the gender of a face is important in social interactions. In the current study, the distribution of informative facial information was systematically examined during gender judgment using two methods, the Bubbles and Focus windows techniques. Two experiments found that the most informative information was around the eyes, followed by the mouth and nose. Other parts of the face contributed to gender recognition but were less important. The left side of the face was used more during gender recognition in both experiments. These results show that mainly areas around the eyes are used for gender judgment and demonstrate a perceptual asymmetry with a normal (non-chimeric) face.

  14. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
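
    As a rough illustration of the idea (not the authors' implementation), a face-average template can be built as a pixel-wise mean of aligned enrolment images; the sketch below assumes pre-aligned, equally sized images, and the correlation threshold is an arbitrary placeholder.

      # Minimal sketch: build a "face average" template from aligned enrolment images.
      # Assumes the images are already registered to a common size and eye positions.
      import numpy as np

      def face_average(aligned_images):
          """Pixel-wise mean of aligned face images, each of shape (H, W) or (H, W, C)."""
          stack = np.stack([img.astype(np.float64) for img in aligned_images], axis=0)
          return stack.mean(axis=0)

      def verify(probe, template, threshold=0.6):
          """Toy verification: normalised correlation between a probe and the average."""
          p = (probe - probe.mean()) / (probe.std() + 1e-9)
          t = (template - template.mean()) / (template.std() + 1e-9)
          return float((p * t).mean()) >= threshold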

  15. Graph Laplace for occluded face completion and recognition.

    PubMed

    Deng, Yue; Dai, Qionghai; Zhang, Zengke

    2011-08-01

    This paper proposes a spectral-graph-based algorithm for face image repairing, which can improve recognition performance on occluded faces. The face completion algorithm proposed in this paper includes three main procedures: 1) sparse representation for partially occluded face classification; 2) image-based data mining; and 3) graph Laplace (GL) for face image completion. The novel part of the proposed framework is GL, named after graphical models and the Laplace equation, which can achieve high-quality repair of damaged or occluded faces. The relationship between GL and the traditional Poisson equation is proven. We apply our face repairing algorithm to produce completed faces, and use face recognition to evaluate the performance of the algorithm. Experimental results verify the effectiveness of the GL method for occluded face completion.
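
    The sketch below illustrates generic Laplace-equation (harmonic) inpainting of an occluded region by iterative neighbour averaging; it is only a simplified stand-in for the paper's graph Laplace method, and the mask handling and iteration count are illustrative.

      # Illustrative sketch of Laplace-equation image completion: occluded pixels are
      # iteratively replaced by the average of their four neighbours (Jacobi updates).
      # This is a generic harmonic inpainting demo, not the paper's graph Laplace method.
      import numpy as np

      def laplace_inpaint(image, mask, n_iter=500):
          """image: 2-D float array; mask: True where pixels are occluded/unknown."""
          out = image.astype(np.float64).copy()
          out[mask] = out[~mask].mean()            # crude initialisation of the hole
          for _ in range(n_iter):
              up    = np.roll(out,  1, axis=0)
              down  = np.roll(out, -1, axis=0)
              left  = np.roll(out,  1, axis=1)
              right = np.roll(out, -1, axis=1)
              # boundary wrap-around from np.roll is ignored for simplicity
              out[mask] = 0.25 * (up + down + left + right)[mask]
          return out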

  16. Pose-Invariant Face Recognition via RGB-D Images

    PubMed Central

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  17. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance, and ERPs were computed for Target Faces, Nontarget Faces, and the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f components in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms.

  18. A Spatial Frequency Account of the Detriment that Local Processing of Navon Letters Has on Face Recognition

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.

    2009-01-01

    Five minutes of processing the local features of a Navon letter causes a detriment in subsequent face-recognition performance (Macrae & Lewis, 2002). We hypothesize a perceptual aftereffect explanation of this effect, in which face recognition is less accurate after adapting to high spatial frequencies at high contrasts. Five experiments were…

  19. Impaired processing of self-face recognition in anorexia nervosa.

    PubMed

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.

  20. A Neural Model of Face Recognition: a Comprehensive Approach

    NASA Astrophysics Data System (ADS)

    Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina

    Visual recognition of faces is an essential behavior of humans: we perform it optimally in everyday life, and this performance allows us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize the features of a person's face against a background. Due to the interdisciplinary nature of this topic, in this contribution we approach face recognition comprehensively, with the purpose of reproducing some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason, our model of face recognition has been based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering efforts allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard, the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces and to the ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.

  1. Tolerance of geometric distortions in infant's face recognition.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2014-02-01

    The aim of the current study was to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both each infant's own mother's face and a stranger's face, and we tested the infants' preference between these faces. We found that only 7-month-old infants maintained a preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces by calculating the absolute distances between facial features, and that the vertical dimension of facial features might be more relevant to infants' face recognition than the horizontal dimension.

  2. Multi-feature fusion for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for the last three decades. Face recognition with thermal images now attracts significant attention since such images can be used in poorly illuminated or unilluminated environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing work leverages only a single feature to characterize a face in a thermal image. To solve this problem, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: local binary patterns, the Gabor jet descriptor, the Weber local descriptor and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution and different l1-minimization methods.
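
    A minimal sketch of this kind of feature-level fusion is shown below (Python with scikit-image), concatenating a uniform LBP histogram with a down-sampled pixel vector; the Gabor jet and Weber local descriptors are omitted, and all parameter values are illustrative rather than those used in the paper.

      # Minimal sketch of feature-level fusion for a thermal face image: a uniform LBP
      # histogram is concatenated with a down-sampled pixel vector.
      import numpy as np
      from skimage.feature import local_binary_pattern
      from skimage.transform import resize

      def fused_features(face, lbp_points=8, lbp_radius=1, down_size=(16, 16)):
          """face: 2-D grayscale thermal image; returns one concatenated feature vector."""
          lbp = local_binary_pattern(face, lbp_points, lbp_radius, method="uniform")
          n_bins = lbp_points + 2                      # number of uniform LBP codes
          hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
          down = resize(face, down_size, anti_aliasing=True).ravel()
          down = down / (np.linalg.norm(down) + 1e-9)  # scale both parts comparably
          return np.concatenate([hist, down])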

  3. 3D face database for human pattern recognition

    NASA Astrophysics Data System (ADS)

    Song, LiMei; Lu, Lu

    2008-10-01

    Face recognition is an essential task for ensuring human safety. It is also an important task in biomedical engineering. A 2D image is not sufficient for precise face recognition. 3D face data include more exact information, such as the precise size of the eyes, mouth, etc. A 3D face database is an important part of human pattern recognition. There are many methods for acquiring 3D data, such as 3D laser scanning systems, 3D phase measurement, shape from shading, shape from motion, etc. This paper introduces a non-orbit, non-contact, non-laser 3D measurement system. The main idea comes from the shape-from-stereo technique. Two cameras are used at different angles. A sequence of light patterns is projected onto the face. Human faces, heads, teeth and bodies can all be measured by the system. The visualization data for each person can form a large 3D face database, which can be used in human recognition. The 3D data provide a vivid copy of a face, so recognition accuracy can reach 100%. Although the 3D data are larger than a 2D image, they can be used in settings that involve only a few people, such as recognition within a family, a small company, etc.

  4. A multibiometric face recognition fusion framework with template protection

    NASA Astrophysics Data System (ADS)

    Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.

    2010-04-01

    In this work we present a multibiometric face recognition framework based on combining information from 2D with 3D facial features. The 3D biometric channel is protected by a privacy enhancing technology, which uses error correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while at the same time enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection matches, and in some cases exceeds, the performance of the corresponding unprotected equivalents, in addition to offering additional privacy protection.

  5. Perception and recognition of faces in adolescence

    PubMed Central

    Fuhrmann, D.; Knoll, L. J.; Sakhardande, A. L.; Speekenbrink, M.; Kadosh, K. C.; Blakemore, S. -J.

    2016-01-01

    Most studies on the development of face cognition abilities have focussed on childhood, with early maturation accounts contending that face cognition abilities are mature by 3–5 years. Late maturation accounts, in contrast, propose that some aspects of face cognition are not mature until at least 10 years. Here, we measured face memory and face perception, two core face cognition abilities, in 661 participants (397 females) in four age groups (younger adolescents (11.27–13.38 years); mid-adolescents (13.39–15.89 years); older adolescents (15.90–18.00 years); and adults (18.01–33.15 years)) while controlling for differences in general cognitive ability. We showed that both face cognition abilities mature relatively late, at around 16 years, with a female advantage in face memory, but not in face perception, both in adolescence and adulthood. Late maturation in the face perception task was driven mainly by protracted development in identity perception, while gaze perception abilities were already comparatively mature in early adolescence. These improvements in the ability to memorize, recognize and perceive faces during adolescence may be related to increasing exploratory behaviour and exposure to novel faces during this period of life. PMID:27647477

  6. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  7. The Impact of Early Bilingualism on Face Recognition Processes

    PubMed Central

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  8. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    PubMed Central

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. It seems that more training samples are able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtained the mirror faces generated from the original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method obtains high classification accuracy. PMID:26576452

  9. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. It seems that more training samples are able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtained the mirror faces generated from the original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method obtains high classification accuracy.
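
    As a rough sketch of the idea (not the authors' exact MSEC formulation), mirror faces can be generated by horizontal flipping and appended to the training set before fitting a least-squares linear classifier; the helper names below are illustrative.

      # Minimal sketch: augment each training face with its horizontal mirror and fit a
      # least-squares (minimum-squared-error style) linear classifier on the enlarged set.
      import numpy as np

      def augment_with_mirrors(images, labels):
          """images: array (n, H, W); returns vectorised originals plus mirrored copies."""
          mirrored = images[:, :, ::-1]                  # flip each image left-right
          X = np.concatenate([images, mirrored]).reshape(2 * len(images), -1).astype(float)
          y = np.concatenate([labels, labels])
          return X, y

      def fit_mse_classifier(X, y):
          """Solve W by least squares so that X @ W approximates one-hot class targets."""
          classes = np.unique(y)
          T = (y[:, None] == classes[None, :]).astype(float)
          W, *_ = np.linalg.lstsq(X, T, rcond=None)
          return W, classes

      def predict_mse(W, classes, X_test):
          return classes[np.argmax(X_test @ W, axis=1)]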

  10. Face recognition in newly hatched chicks at the onset of vision.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces-for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision.

  11. Face engagement during infancy predicts later face recognition ability in younger siblings of children with autism.

    PubMed

    de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H

    2014-07-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life.

  12. Face recognition in simulated prosthetic vision: face detection-based image processing strategies

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Objective. Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Approach. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. Main results. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Significance. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.
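
    For illustration, a face-detection-based strategy of this kind might be sketched as below, using OpenCV's stock Viola-Jones cascade as a stand-in detector, magnifying the detected region of interest and pixelating it to a coarse grid; the 32 x 32 grid and detector parameters are arbitrary choices, not those of the study.

      # Rough sketch of a face-detection-based strategy for simulated prosthetic vision:
      # detect a face, magnify that region of interest, then pixelate it to a coarse grid.
      import cv2

      def roi_lowres(gray_image, grid=(32, 32)):
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              roi = gray_image                       # fall back to the whole frame
          else:
              x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detection
              roi = gray_image[y:y + h, x:x + w]
          small = cv2.resize(roi, grid, interpolation=cv2.INTER_AREA)
          # blow the coarse grid back up so each cell looks like one "phosphene"
          return cv2.resize(small, gray_image.shape[::-1], interpolation=cv2.INTER_NEAREST)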

  13. Understanding eye movements in face recognition using hidden Markov models.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than in the locations of the fixations alone.
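
    A minimal sketch of this modelling step is given below, fitting a Gaussian HMM to one participant's fixation sequences with the hmmlearn library (used here purely for illustration; it is not the authors' toolbox), so that the state means approximate regions of interest and the transition matrix summarizes scan paths.

      # Minimal sketch: fit a Gaussian HMM to a participant's fixation coordinates.
      import numpy as np
      from hmmlearn import hmm

      def fit_fixation_hmm(fixation_sequences, n_states=3, seed=0):
          """fixation_sequences: list of (n_fixations_i, 2) arrays of x, y coordinates."""
          X = np.concatenate(fixation_sequences)
          lengths = [len(seq) for seq in fixation_sequences]
          model = hmm.GaussianHMM(n_components=n_states, covariance_type="full",
                                  n_iter=100, random_state=seed)
          model.fit(X, lengths)
          # model.means_ approximate region-of-interest centres,
          # model.transmat_ summarises the scan-path transitions
          return model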

  14. Robust face recognition algorithm for identification of disaster victims

    NASA Astrophysics Data System (ADS)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms such as the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic the real-life conditions in which the algorithm would have to operate, specific databases have been constructed and merged with parts of existing databases. Experiments on these databases show that the proposed algorithm achieves recognition rates beyond 95%.

  15. Face recognition performance of individuals with Asperger syndrome on the Cambridge Face Memory Test.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2011-12-01

    Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42 IQ-matched non-ASD individuals, and with age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within ±1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance.

  16. Image dependency in the recognition of newly learnt faces.

    PubMed

    Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily

    2017-05-01

    Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these lead to poor recognition accuracy. Three experiments are reported that extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image, with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural representations.

  17. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  18. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition by nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model, and face identity can be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations in pose and expression.
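
    The coherent-subspace construction can be sketched roughly as below (Python with scikit-learn), applying CCA between PCA features of paired HR and LR training faces; the RBF mapping and NN classification stages are omitted, and the component counts are illustrative.

      # Minimal sketch: build a coherent feature space with CCA between PCA features of
      # paired high-resolution (HR) and low-resolution (LR) training faces.
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA

      def coherent_subspace(hr_images, lr_images, n_pca=50, n_cca=20):
          """hr_images, lr_images: arrays (n_samples, n_pixels) of paired training faces."""
          pca_hr = PCA(n_components=n_pca).fit(hr_images)
          pca_lr = PCA(n_components=n_pca).fit(lr_images)
          feats_hr = pca_hr.transform(hr_images)
          feats_lr = pca_lr.transform(lr_images)
          cca = CCA(n_components=n_cca).fit(feats_lr, feats_hr)
          coh_lr, coh_hr = cca.transform(feats_lr, feats_hr)   # coherent features
          return pca_hr, pca_lr, cca, coh_lr, coh_hr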

  19. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  20. Lateralization of kin recognition signals in the human face

    PubMed Central

    Dal Martello, Maria F.; Maloney, Laurence T.

    2010-01-01

    When human subjects view photographs of faces, their judgments of identity, gender, emotion, age, and attractiveness depend more on one side of the face than the other. We report an experiment testing whether allocentric kin recognition (the ability to judge the degree of kinship between individuals other than the observer) is also lateralized. One hundred and twenty-four observers judged whether or not pairs of children were biological siblings by looking at photographs of their faces. In three separate conditions, (1) the right hemi-face was masked, (2) the left hemi-face was masked, or (3) the face was fully visible. The d′ measures for the masked left hemi-face and masked right hemi-face were 1.024 and 1.004, respectively (no significant difference), and the d′ measure for the unmasked face was 1.079, not significantly greater than that for either of the masked conditions. We conclude, first, that there is no superiority of one or the other side of the observed face in kin recognition, second, that the information present in the left and right hemi-faces relevant to recognizing kin is completely redundant, and last that symmetry cues are not used for kin recognition. PMID:20884584

  1. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals can confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint features of the pills, which are extracted by the proposed modified stroke width transform (MSWT) and described by the weighted shape context (WSC). Experiments show that our proposed pill recognition method can reach an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10,000 query pill images into around 2,000 categories.

  2. Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity.

    PubMed

    Tranel, D; Damasio, A R; Damasio, H

    1988-05-01

    We conducted a series of experiments to assess the ability to recognize the meaning of facial expressions, gender, and age in four patients with severe impairments of the recognition of facial identity. In three patients the recognition of face identity could be dissociated from that of facial expression, age, and gender. In one, all forms of face recognition were impaired. Thus, a given lesion may preclude one type of recognition but not another. We conclude that (1) the cognitive demands posed by different forms of recognition are met at different processing levels, and (2) different levels depend on different neural substrates.

  3. Error Rates in Users of Automatic Face Recognition Software.

    PubMed

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  4. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  5. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  6. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    ERIC Educational Resources Information Center

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  7. The Development of Spatial Frequency Biases in Face Recognition

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  8. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an ''average'' chimpanzee face using…

  9. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed successfully for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  10. Individual differences in cortical face selectivity predict behavioral performance in face recognition

    PubMed Central

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than to other object categories. Importantly, object selectivity is widely considered a neural signature of a functionally specialized area for processing its preferred object category in the human brain. However, the behavioral significance of object selectivity remains unclear. In the present study, we used an individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation for the contrast of faces vs. non-face objects, and face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously learned faces after regressing out that for non-face objects in an old/new memory task. We found that participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association between face selectivity in the FFA and face recognition ability cannot be accounted for by the FFA response to objects or by behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, as confirmed by replication in another independent participant group. In sum, our finding provides empirical evidence for the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
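
    The behavioural index described above can be sketched as follows: regress face-recognition accuracy on object-recognition accuracy across participants and standardize the residuals; this is a generic reconstruction of the analysis, not the authors' code.

      # Minimal sketch of a face-specific ability score: the z-scored residual of face
      # accuracy after regressing out object-recognition accuracy across participants.
      import numpy as np

      def face_specific_ability(face_acc, object_acc):
          """face_acc, object_acc: 1-D arrays of per-participant accuracies."""
          slope, intercept = np.polyfit(object_acc, face_acc, deg=1)
          residuals = face_acc - (slope * object_acc + intercept)
          return (residuals - residuals.mean()) / residuals.std(ddof=1)   # normalised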

  11. A face recognition algorithm based on thermal and visible data

    NASA Astrophysics Data System (ADS)

    Sochenkov, Ilya; Tihonkih, Dmitrii; Vokhmintcev, Aleksandr; Melnikov, Andrey; Makovetskii, Artyom

    2016-09-01

    In this work we present an algorithm that fuses thermal infrared and visible imagery to identify persons. The proposed face recognition method contains several components, in particular rigid-body image registration. The rigid registration is achieved by a modified variant of the iterative closest point (ICP) algorithm. We consider an affine transformation in three-dimensional space that preserves the angles between lines. The matching algorithm is inspired by recent results from the neurophysiology of vision. We also consider the error-metric minimization stage of ICP for the case of an arbitrary affine transformation. Our face recognition algorithm also uses localized-contouring algorithms to segment the subject's face, and thermal matching based on partial least squares discriminant analysis. Thermal imagery face recognition methods are advantageous when there is no control over illumination or for detecting disguised faces. The proposed algorithm leads to good matching accuracies for different person recognition scenarios (near infrared, far infrared, thermal infrared, viewed sketch). The performance of the proposed face recognition algorithm in real indoor environments is presented and discussed.
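
    As a generic illustration of the registration idea (a textbook ICP variant with an affine model, not the authors' modified algorithm), each iteration below matches every source point to its nearest target point and then refits an affine transform by least squares.

      # Illustrative ICP-style registration with an affine model in 3-D: alternate
      # nearest-neighbour matching and a least-squares affine update.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp_affine(source, target, n_iter=20):
          """source, target: (n, 3) point clouds; returns the transformed source points."""
          tree = cKDTree(target)
          src = source.astype(float).copy()
          for _ in range(n_iter):
              _, idx = tree.query(src)               # closest target point for each point
              matched = target[idx]
              ones = np.ones((len(src), 1))
              M, *_ = np.linalg.lstsq(np.hstack([src, ones]), matched, rcond=None)
              src = np.hstack([src, ones]) @ M       # apply the 4x3 affine update
          return src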

  12. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  13. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition

    PubMed Central

    Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283

  14. Face recognition with illumination and pose variations using MINACE filters

    NASA Astrophysics Data System (ADS)

    Casasent, David; Patnaik, Rohit

    2005-10-01

    This paper presents the status of our present CMU face recognition work. We first present a face recognition system that functions in the presence of illumination variations. We then present initial results when pose variations are also considered. A separate minimum noise and correlation energy (MINACE) filter is synthesized for each person. Our concern is face identification and impostor (non-database face) rejection. Most prior face identification work did not address impostor rejection. We also present results for face verification with impostor rejection. The MINACE parameter c trades off distortion tolerance (recognition) against discrimination (impostor rejection) performance. We use an automated filter-synthesis algorithm to select c and to synthesize the MINACE filter for each person using a training set of images of that person and a validation set of a few faces of other persons; this synthesis ensures both good recognition and impostor rejection performance. No impostor data is present in the training or validation sets. The peak-to-correlation energy ratio (PCE) metric is used as the match score in both the filter-synthesis and test stages, and we show that it is better than using the correlation peak value. We use circular correlations in filter synthesis and in tests, since such filters require one-fourth the storage space and similarly fewer on-line correlation calculations compared to the use of linear correlation filters. All training set images are registered (aligned) using the coordinates of several facial landmarks to remove scale variations and tilt bias. We also discuss the proper handling of pose variations by either pose estimation or by transforming the test input to all reference poses. Our face recognition system is evaluated using images from the CMU Pose, Illumination, and Expression (PIE) database. The same set of MINACE filters and impostor faces are used to evaluate the performance of the face identification and verification systems.
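
    As an illustration of the match score used above, the sketch below computes a circular correlation through FFTs and scores the correlation plane with a peak-to-correlation energy ratio. It is a minimal numpy sketch assuming one common definition of PCE (squared peak over total plane energy); it does not reproduce the MINACE filter synthesis itself, and the example arrays are placeholders.

        import numpy as np

        def circular_correlation(image, filt):
            """Correlate image with filter via FFTs (wrap-around / circular correlation)."""
            # Correlation in the frequency domain: IFFT( F(image) * conj(F(filter)) )
            return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(filt))))

        def pce(corr_plane):
            """Peak-to-correlation energy ratio: squared peak over total plane energy."""
            peak = corr_plane.max()
            energy = np.sum(corr_plane ** 2)
            return (peak ** 2) / energy

        # Hypothetical usage: score a test face against one person's filter.
        # Both are same-size 2-D arrays (e.g. 64x64 grayscale); random data here.
        rng = np.random.default_rng(0)
        test_face = rng.random((64, 64))
        person_filter = rng.random((64, 64))
        print(f"PCE match score: {pce(circular_correlation(test_face, person_filter)):.4f}")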

  15. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival, and evidence from evolutionary psychology on visual self-recognition, it is reasonable to expect that neural mechanisms for higher social-cognitive processes underlie self-face recognition. A decade of neuroimaging studies has, however, not provided encouraging findings in this respect. Self-face-specific activation has typically been reported in areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processes, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment it showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have also responded to the face of a close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection underlying physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.

  16. Uniform design based SVM model selection for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection makes it difficult to apply in face recognition. To overcome this shortcoming, we exploit the advantages of uniform design, a space-filling design grounded in uniformly scattering theory, to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on the SVM with the optimal model, obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
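
    To make the idea concrete, here is a minimal sketch of model selection over a small, evenly scattered set of (C, gamma) candidates instead of a dense grid, assuming scikit-learn's SVC on precomputed face feature vectors. The point set below is a hand-rolled stand-in for a proper uniform design table, so treat it as illustrative rather than the authors' procedure.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def uniform_design_points(n_points, low, high):
            """Evenly scattered candidate values in log10 space for one hyperparameter.
            A stand-in for a proper uniform design table."""
            return np.logspace(low, high, n_points)

        def select_svm_model(X, y, n_points=5):
            """Pick (C, gamma) by cross-validation over a small, uniformly scattered
            point set instead of a dense grid search."""
            best, best_score = None, -np.inf
            Cs = uniform_design_points(n_points, -1, 3)      # 0.1 .. 1000
            gammas = uniform_design_points(n_points, -4, 0)  # 1e-4 .. 1
            # Pair the two sequences with an offset so points spread over the 2-D space.
            for i, C in enumerate(Cs):
                gamma = gammas[(2 * i) % n_points]
                score = cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=3).mean()
                if score > best_score:
                    best, best_score = (C, gamma), score
            return best, best_score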

  17. Face recognition under variable illumination via sparse representation of patches

    NASA Astrophysics Data System (ADS)

    Fan, Shouke; Liu, Rui; Feng, Weiguo; Zhu, Ming

    2013-10-01

    The objective of this work is to recognize faces under variations in illumination. Previous work has indicated that variations in illumination can dramatically reduce the performance of face recognition. To this end, an efficient method for face recognition that is robust under variable illumination is proposed in this paper. First of all, a discrete cosine transform (DCT) in the logarithm domain is employed to preprocess the images, removing the illumination variations by discarding an appropriate number of low-frequency DCT coefficients. Then, a face image is partitioned into several patches, and each patch is classified using Sparse Representation-based Classification. Finally, the identity of a test image is determined by the classification results of its patches. Experimental results on the Yale B database and the CMU PIE database show that excellent recognition rates can be achieved by the proposed method.
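
    A minimal sketch of the preprocessing step described above, assuming SciPy's DCT routines: the image is taken into the logarithm domain, a wedge of low-frequency DCT coefficients is zeroed, and the image is reconstructed. The number and pattern of discarded coefficients, and the decision to keep the DC term, are assumptions for illustration; the patch-wise sparse-representation classification stage is not shown.

        import numpy as np
        from scipy.fft import dctn, idctn

        def normalize_illumination(face, n_discard=8):
            """Suppress low-frequency (illumination) content in the log domain by
            zeroing a triangle of low-frequency DCT coefficients."""
            log_face = np.log1p(face.astype(np.float64))
            coeffs = dctn(log_face, norm="ortho")
            # Zero coefficients whose (row + col) index falls below the cut-off,
            # keeping the DC term so overall brightness is not lost entirely.
            rows, cols = np.indices(coeffs.shape)
            mask = (rows + cols) < n_discard
            mask[0, 0] = False
            coeffs[mask] = 0.0
            return idctn(coeffs, norm="ortho")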

  18. Facilitation of face recognition through the retino-tectal pathway.

    PubMed

    Nakano, Tamami; Higashida, Noriko; Kitazawa, Shigeru

    2013-08-01

    Humans can shift their gazes faster to human faces than to non-face targets during a task in which they are required to choose between face and non-face targets. However, it remains unclear whether a direct projection from the retina to the superior colliculus is specifically involved in this facilitated recognition of faces. To address this question, we presented a pair of face and non-face pictures to participants modulated in greyscale (luminance-defined stimuli) in one condition and modulated in a blue-yellow scale (S-cone-isolating stimuli) in another. The information of the S-cone-isolating stimuli is conveyed through the retino-geniculate pathway rather than the retino-tectal pathway. For the luminance stimuli, the reaction time was shorter towards a face than towards a non-face target. The facilitatory effect while choosing a face disappeared with the S-cone stimuli. Moreover, fearful faces elicited a significantly larger facilitatory effect relative to neutral faces, when the face (with or without emotion) and non-face stimuli were presented in greyscale. The effect of emotional expressions disappeared with the S-cone stimuli. In contrast to the S-cone stimuli, the face facilitatory effect was still observed with negated stimuli that were prepared by reversing the polarity of the original colour pictures and looked as unusual as the S-cone stimuli but still contained luminance information. These results demonstrate that the face facilitatory effect requires the facial and emotional information defined by luminance, suggesting that the luminance information conveyed through the retino-tectal pathway is responsible for the faster recognition of human faces.

  19. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially in unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a pair consisting of a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and the geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted into the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
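
    The keypoint-plus-local-descriptor front end can be pictured with off-the-shelf components; the sketch below uses OpenCV's ORB features and brute-force matching as stand-ins and reduces the match to a crude distance-based similarity. It is not the paper's robust point set matching, which additionally aligns the two point sets using their geometry.

        import cv2

        def match_partial_face(gallery_img, probe_patch, max_matches=50):
            """Detect keypoints, extract local descriptors, and match a probe patch
            against a gallery image; returns a crude distance-based similarity."""
            orb = cv2.ORB_create(nfeatures=500)
            kp_g, des_g = orb.detectAndCompute(gallery_img, None)
            kp_p, des_p = orb.detectAndCompute(probe_patch, None)
            if des_g is None or des_p is None:
                return 0.0
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_p, des_g), key=lambda m: m.distance)[:max_matches]
            if not matches:
                return 0.0
            # Smaller mean descriptor distance -> more similar faces.
            return 1.0 / (1.0 + sum(m.distance for m in matches) / len(matches))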

  20. Influence of social anxiety on recognition memory for happy and angry faces: Comparison between own- and other-race faces.

    PubMed

    Kikutani, Mariko

    2017-03-15

    The reported experiment investigated memory for unfamiliar faces and how it is influenced by race, facial expression, direction of gaze, and observers' level of social anxiety. Eighty-seven Japanese participants initially memorized images of Oriental and Caucasian faces displaying either happy or angry expressions with direct or averted gaze. They then saw the previously seen faces and additional distractor faces displaying neutral expressions, and judged whether they had seen them before. Their level of social anxiety was measured with a questionnaire. Regardless of gaze or race of the faces, recognition of faces studied with happy expressions was more accurate than of those studied with angry expressions (happiness advantage), but this tendency weakened for people with higher levels of social anxiety, possibly due to their increased anxiety about positive feedback in social interactions. Interestingly, the reduction of the happiness advantage observed for the highly anxious participants was more prominent for own-race faces than for other-race faces. The results suggest that an angry expression disrupts processing of identity-relevant features of the faces, but that memory for happy faces is affected by social anxiety traits, and the magnitude of the impact may depend on the importance of the face.

  1. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  2. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. This observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory.

  3. Equivalent activation of the hippocampus by face-face and face-laugh paired associate learning and recognition.

    PubMed

    Holdstock, J S; Crane, J; Bachorowski, J-A; Milner, B

    2010-11-01

    The human hippocampus is known to play an important role in relational memory. Both patient lesion studies and functional-imaging studies have shown that it is involved in the encoding and retrieval from memory of arbitrary associations. Two recent patient lesion studies, however, have found dissociations between spared and impaired memory within the domain of relational memory. Recognition of associations between information of the same kind (e.g., two faces) was spared, whereas recognition of associations between information of different kinds (e.g., face-name or face-voice associations) was impaired by hippocampal lesions. Thus, recognition of associations between information of the same kind may not be mediated by the hippocampus. Few imaging studies have directly compared activation at encoding and recognition of associations between the same and different types of information. Those that have done so have shown mixed findings and have been open to alternative interpretation. We used fMRI to compare hippocampal activation while participants studied and later recognized face-face and face-laugh paired associates. We found no differences in hippocampal activation between our two types of stimulus materials during either study or recognition. Study of both types of paired associate activated the hippocampus bilaterally, but the hippocampus was not activated by either condition during recognition. Our findings suggest that the human hippocampus is normally engaged to a similar extent by study and recognition of associations between information of the same kind and associations between information of different kinds.

  4. Can massive but passive exposure to faces contribute to face recognition abilities?

    PubMed

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-04-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using categorization tasks. However, the absence of enhanced recognition following categorization may not simulate key aspects of real-life massive exposure without individuation to other-race faces. Real-life exposure spans years of seeing a multitude of faces, under variant conditions, including expression, view, lighting and gaze, albeit with no subcategory individuation. However, in most real-life settings, massive exposure operates in concert with individuation. An exception to that are neonatology nurses, a unique population that is exposed to--but do not individuate--massive numbers of newborn faces. Our findings show that recognition of newborn faces by nurses does not differ from adults who are rarely exposed to newborn faces. A control study showed that the absence of enhanced recognition cannot be attributed to the relatively short exposure to each newborn face in the neonatology unit or to newborns' apparent homogeneous appearance. It is therefore the quality--not the quantity--of exposure that determines recognition abilities.

  5. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    PubMed

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.

  6. Perspective projection for variance pose face recognition from camera calibration

    NASA Astrophysics Data System (ADS)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variant-pose face features is challenging. We provide a solution to this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on the frontal images and the remaining poses of the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.

  7. Thermal Face Recognition in an Operational Scenario

    DTIC Science & Technology

    2004-01-01

    were based on gallery and probe sets collected indoors during a single session. In that respect, they resemble the fa/fb tests in the FERET program...part by the DARPA Human Identification at a Distance (HID) program, contract # DARPA/AFOSR F49620-01-C-0008. imagery collected in a single session. Their...during a single session. During data collection, illumination conditions were purposely varied in order to present a challenge for visible face

  8. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults.

  9. Familiarity is not notoriety: phenomenological accounts of face recognition

    PubMed Central

    Liccione, Davide; Moruzzi, Sara; Rossi, Federica; Manganaro, Alessia; Porta, Marco; Nugrahaningsih, Nahumi; Caserio, Valentina; Allegri, Nicola

    2014-01-01

    From a phenomenological perspective, faces are perceived differently from objects as their perception always involves the possibility of a relational engagement (Bredlau, 2011). This is especially true for familiar faces, i.e., faces of people with a history of real relational engagements. Similarly, the valence of emotional expressions assumes a key role, as it defines the sense and direction of this engagement. Following these premises, the aim of the present study is to demonstrate that face recognition is facilitated by at least two variables, familiarity and emotional expression, and that perception of familiar faces is not influenced by orientation. In order to verify this hypothesis, we implemented a 3 × 3 × 2 factorial design, showing 17 healthy subjects three types of faces (unfamiliar, personally familiar, famous) characterized by three different emotional expressions (happy, angry/sad, neutral) and in two different orientations (upright vs. inverted). We showed every subject a total of 180 faces with the instruction to give a familiarity judgment. Reaction times (RTs) were recorded, and we found that the recognition of a face is facilitated by personal familiarity and emotional expression, and that this process is otherwise independent of cognitive elaboration of stimuli and remains stable despite orientation. These results highlight the need to make a distinction between famous and personally familiar faces when studying face perception and to consider its historical aspects from a phenomenological point of view. PMID:25225476

  10. Wavelet-based illumination invariant preprocessing in face recognition

    NASA Astrophysics Data System (ADS)

    Goh, Yi Zheng; Teoh, Andrew Beng Jin; Goh, Kah Ong Michael

    2009-04-01

    The performance of contemporary two-dimensional face-recognition systems has not been satisfactory due to variation in lighting. As a result, much work on solving illumination variation in face recognition has been carried out in past decades. Among the approaches, the Illumination-Reflectance model is one of the generic models used to separate the individual reflectance and illumination components of an object. The illumination component can be removed by means of image-processing techniques to regain the intrinsic face features, which are depicted by the reflectance component. We present a wavelet-based illumination-invariant algorithm as a preprocessing technique for face recognition. On the basis of the multiresolution nature of wavelet analysis, we decompose both illumination and reflectance components from a face image in a systematic way. The illumination component, which resides in the low-spatial-frequency subband, can be eliminated efficiently. This technique proves very advantageous, achieving higher recognition performance on the YaleB, CMU PIE, and FRGC face databases.
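
    A minimal sketch of the idea, assuming PyWavelets: decompose the face image, attenuate the low-spatial-frequency approximation subband where slowly varying illumination tends to concentrate, and reconstruct. The wavelet family, decomposition level, and damping factor are illustrative choices, not the parameters used in the paper.

        import numpy as np
        import pywt

        def wavelet_illumination_normalize(face, wavelet="db4", level=2, damping=0.2):
            """Attenuate the low-spatial-frequency approximation subband, where slowly
            varying illumination tends to live, and reconstruct the face image."""
            coeffs = pywt.wavedec2(face.astype(np.float64), wavelet, level=level)
            coeffs[0] = coeffs[0] * damping  # shrink (rather than zero) the approximation band
            return pywt.waverec2(coeffs, wavelet)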

  11. Face Recognition by Metropolitan Police Super-Recognisers.

    PubMed

    Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.

  12. Face Recognition by Metropolitan Police Super-Recognisers

    PubMed Central

    Robertson, David J.; Noyes, Eilidh; Dowsett, Andrew J.; Jenkins, Rob; Burton, A. Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  13. Faces are special but not too special: spared face recognition in amnesia is based on familiarity.

    PubMed

    Aly, Mariam; Knight, Robert T; Yonelinas, Andrew P

    2010-11-01

    Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results show that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. In addition, the findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia.

  14. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  15. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time per frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a RaspberryPi (B model).
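
    The storage saving quoted above comes from keeping only the level-K approximation subband, which holds roughly 1/2^(2K) of the original coefficients. A minimal sketch with PyWavelets is given below; it matches faces by nearest neighbour on those coefficients and omits the PCA stage and embedded-platform specifics of the actual system.

        import numpy as np
        import pywt

        def approximation_features(face, wavelet="haar", level=3):
            """Keep only the level-K approximation subband; for K=3 this shrinks the
            number of coefficients by roughly 2**(2*K) = 64 versus the raw image."""
            coeffs = pywt.wavedec2(face.astype(np.float64), wavelet, level=level)
            return coeffs[0].ravel()

        def nearest_face(probe, gallery_feats, labels):
            """Match a probe face to the gallery by Euclidean distance in the
            compressed wavelet-approximation space."""
            probe_feat = approximation_features(probe)
            dists = [np.linalg.norm(probe_feat - g) for g in gallery_feats]
            return labels[int(np.argmin(dists))]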

  16. Learning from humans: computational modeling of face recognition.

    PubMed

    Wallraven, Christian; Schwaninger, Adrian; Bülthoff, Heinrich H

    2005-12-01

    In this paper, we propose a computational architecture of face recognition based on evidence from cognitive research. Several recent psychophysical experiments have shown that humans process faces by a combination of configural and component information. Using an appearance-based implementation of this architecture based on low-level features and their spatial relations, we were able to model aspects of human performance found in psychophysical studies. Furthermore, results from additional computational recognition experiments show that our framework is able to achieve excellent recognition performance even under large view rotations. Our interdisciplinary study is an example of how results from cognitive research can be used to construct recognition systems with increased performance. Finally, our modeling results also make new experimental predictions that will be tested in further psychophysical studies, thus effectively closing the loop between psychophysical experimentation and computational modeling.

  17. Face Encoding and Recognition in the Human Brain

    NASA Astrophysics Data System (ADS)

    Haxby, James V.; Ungerleider, Leslie G.; Horwitz, Barry; Maisog, Jose Ma.; Rapoport, Stanley I.; Grady, Cheryl L.

    1996-01-01

    A dissociation between human neural systems that participate in the encoding and later recognition of new memories for faces was demonstrated by measuring memory task-related changes in regional cerebral blood flow with positron emission tomography. There was almost no overlap between the brain structures associated with these memory functions. A region in the right hippocampus and adjacent cortex was activated during memory encoding but not during recognition. The most striking finding in neocortex was the lateralization of prefrontal participation. Encoding activated left prefrontal cortex, whereas recognition activated right prefrontal cortex. These results indicate that the hippocampus and adjacent cortex participate in memory function primarily at the time of new memory encoding. Moreover, face recognition is not mediated simply by recapitulation of operations performed at the time of encoding but, rather, involves anatomically dissociable operations.

  18. Evidence for view-invariant face recognition units in unfamiliar face learning.

    PubMed

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  19. Are portrait artists superior face recognizers? Limited impact of adult experience on face recognition ability.

    PubMed

    Tree, Jeremy J; Horry, Ruth; Riley, Howard; Wilmer, Jeremy B

    2017-04-01

    Across 2 studies, the authors asked whether extensive experience in portrait art is associated with face recognition ability. In Study 1, 64 students completed a standardized face recognition test before and after completing a year-long art course that included substantial portraiture training. They found no evidence of an improvement in face recognition after training over and above what would be expected by practice alone. In Study 2, the authors investigated the possibility that more extensive experience might be needed for such advantages to emerge, by testing a cohort of expert portrait artists (N = 28), all of whom had many years of experience. In addition to memory for faces, they also explored memory for abstract art and for words in a paired-associate recognition test. The expert portrait artists performed similarly to a large, normative comparison sample on memory for faces and words but showed a small advantage for abstract art. Taken together, the results converge with existing literature to suggest that there is relatively little plasticity in face recognition in adulthood, at which point our substantial everyday experience with faces may have pushed us to the limits of our capabilities.

  20. CMOS sensor for face tracking and recognition

    NASA Astrophysics Data System (ADS)

    Ginhac, Dominique; Prasetyo, Eri; Paindavoine, Michel

    2005-03-01

    This paper describes the main principles of a vision sensor dedicated to detecting and tracking faces in video sequences. For this purpose, a current-mode CMOS active sensor has been designed using an array of pixels that are amplified by current mirrors in the column amplifiers. The circuit is simulated using Mentor Graphics software with the parameters of a 0.6 μm CMOS process. The circuit design is complemented by a sequential control unit whose purpose is to capture subwindows at any location and of any size within the whole image.

  1. Toward Development of a Face Recognition System for Watchlist Surveillance.

    PubMed

    Kamgar-Parsi, Behrooz; Lawson, Wallace; Kamgar-Parsi, Behzad

    2011-10-01

    The interest in face recognition is moving toward real-world applications and uncontrolled sensing environments. An important application of interest is automated surveillance, where the objective is to recognize and track people who are on a watchlist. For this open world application, a large number of cameras that are increasingly being installed at many locations in shopping malls, metro systems, airports, etc., will be utilized. While a very large number of people will approach or pass by these surveillance cameras, only a small set of individuals must be recognized. That is, the system must reject every subject unless the subject happens to be on the watchlist. While humans routinely reject previously unseen faces as strangers, rejection of previously unseen faces has remained a difficult aspect of automated face recognition. In this paper, we propose an approach motivated by human perceptual ability of face recognition which can handle previously unseen faces. Our approach is based on identifying the decision region(s) in the face space which belong to the target person(s). This is done by generating two large sets of borderline images, projecting just inside and outside of the decision region. For each person on the watchlist, a dedicated classifier is trained. Results of extensive experiments support the effectiveness of our approach. In addition to extensive experiments using our algorithm and prerecorded images, we have conducted considerable live system experiments with people in realistic environments.
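
    The open-set flavour of the task (identify watchlist members, reject everyone else) can be sketched with one dedicated classifier per person plus a score threshold. The sketch below uses scikit-learn SVMs trained against a pool of non-watchlist faces as negatives; it is an illustrative stand-in, not the paper's borderline-image construction of decision regions.

        import numpy as np
        from sklearn.svm import SVC

        class WatchlistRecognizer:
            """One dedicated classifier per watchlist person; any face whose best
            score stays under the threshold is rejected as a stranger."""

            def __init__(self, threshold=0.7):
                self.threshold = threshold
                self.models = {}

            def train(self, features, labels, watchlist):
                # `features` are precomputed face descriptors; `labels` must include
                # non-watchlist identities to serve as negative examples.
                for person in watchlist:
                    y = (np.asarray(labels) == person).astype(int)
                    self.models[person] = SVC(kernel="rbf", probability=True).fit(features, y)

            def identify(self, feature):
                scores = {p: m.predict_proba([feature])[0, 1] for p, m in self.models.items()}
                best_person = max(scores, key=scores.get)
                if scores[best_person] < self.threshold:
                    return "reject"   # previously unseen face / not on the watchlist
                return best_person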

  2. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.

  3. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
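
    The two steps described above (building virtual samples by multiplying same-subject images, then classifying by a linear combination of the K nearest training samples) might look like the following numpy sketch. The normalization of the product image and the choice of K are assumptions for illustration.

        import numpy as np

        def virtual_sample(img_a, img_b):
            """Form a virtual training image by element-wise multiplication of two
            images of the same subject (rescaled back to [0, 1])."""
            prod = img_a.astype(np.float64) * img_b.astype(np.float64)
            return prod / (prod.max() + 1e-12)

        def classify(test, train_feats, train_labels, k=10):
            """Pick the K nearest training samples, represent the test sample as their
            least-squares linear combination, and assign the label whose samples
            give the smallest reconstruction residual."""
            test = test.ravel()
            X = np.stack([t.ravel() for t in train_feats])
            idx = np.argsort(np.linalg.norm(X - test, axis=1))[:k]
            Xk = X[idx].T                                 # columns = selected samples
            yk = [train_labels[i] for i in idx]
            w, *_ = np.linalg.lstsq(Xk, test, rcond=None)
            residuals = {}
            for label in set(yk):
                sel = np.array([l == label for l in yk])
                recon = Xk[:, sel] @ w[sel]
                residuals[label] = np.linalg.norm(test - recon)
            return min(residuals, key=residuals.get)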

  4. Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?

    ERIC Educational Resources Information Center

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-01-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…

  5. Face recognition using composite classifier with 2DPCA

    NASA Astrophysics Data System (ADS)

    Li, Jia; Yan, Ding

    2017-01-01

    In conventional face recognition, most researchers have focused on enhancing precision when the input data already belong to the database. However, they have paid less attention to confirming whether the input data belong to the database at all. This paper proposes an approach to face recognition using two-dimensional principal component analysis (2DPCA). It designs a novel composite classifier founded on statistical techniques. Moreover, this paper utilizes the advantages of SVM and logistic regression in the field of classification, thereby improving accuracy considerably. To test the performance of the composite classifier, experiments were implemented on the ORL and FERET databases, and the results were reported and evaluated.
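
    For reference, the 2DPCA step that underlies the classifier can be sketched as follows: build the image scatter matrix directly from the 2-D images, keep its leading eigenvectors, and project each image onto them. The composite SVM/logistic-regression classifier described in the abstract would then be trained on these projected features; that part is not shown.

        import numpy as np

        def train_2dpca(images, n_components=10):
            """2DPCA: build the image scatter matrix from 2-D images directly and keep
            its leading eigenvectors as the projection matrix."""
            A = np.stack([im.astype(np.float64) for im in images])   # (N, h, w)
            mean = A.mean(axis=0)
            G = np.zeros((A.shape[2], A.shape[2]))
            for im in A:
                d = im - mean
                G += d.T @ d
            G /= len(A)
            eigvals, eigvecs = np.linalg.eigh(G)                     # ascending order
            W = eigvecs[:, ::-1][:, :n_components]                   # (w, n_components)
            return mean, W

        def project_2dpca(image, mean, W):
            """Feature matrix Y = (A - mean) W, flattened for a downstream classifier."""
            return ((image.astype(np.float64) - mean) @ W).ravel()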

  6. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  7. Efficient Detection of Occlusion prior to Robust Face Recognition

    PubMed Central

    Dugelay, Jean-Luc

    2014-01-01

    While there has been an enormous amount of research on face recognition under pose/illumination/expression changes and image degradations, problems caused by occlusions have attracted relatively little attention. Facial occlusions, due, for example, to sunglasses, hat/cap, scarf, and beard, can significantly deteriorate the performance of face recognition systems in uncontrolled environments such as video surveillance. The goal of this paper is to explore face recognition in the presence of partial occlusions, with emphasis on real-world scenarios (e.g., sunglasses and scarf). In this paper, we propose an efficient approach which consists of first analysing the presence of potential occlusion on a face and then conducting face recognition on the non-occluded facial regions based on selective local Gabor binary patterns. Experiments demonstrate that the proposed method outperforms the state-of-the-art works including KLD-LGBPHS, S-LNMF, OA-LBP, and RSC. Furthermore, evaluations of the proposed approach under illumination and extreme facial expression changes also yield significant results. PMID:24526902
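
    A simplified sketch of the second stage, assuming scikit-image: compute plain (non-Gabor) uniform LBP histograms per block and concatenate only the blocks not flagged as occluded by the first stage. The block grid and LBP parameters are illustrative, and the paper's selective local Gabor binary patterns are richer than this.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def masked_lbp_descriptor(face, occluded, grid=(4, 4), p=8, r=1):
            """Concatenate LBP histograms from non-occluded blocks only.
            `occluded` is a boolean array of shape `grid` marking blocks to skip."""
            lbp = local_binary_pattern(face, p, r, method="uniform")
            n_bins = p + 2
            h, w = face.shape
            bh, bw = h // grid[0], w // grid[1]
            hists = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    if occluded[i, j]:
                        continue  # blocks covered by e.g. sunglasses or a scarf are ignored
                    block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
                    hists.append(hist)
            return np.concatenate(hists) if hists else np.array([])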

  8. Two dimensional discriminant neighborhood preserving embedding in face recognition

    NASA Astrophysics Data System (ADS)

    Pang, Meng; Jiang, Jifeng; Lin, Chuang; Wang, Binghui

    2015-03-01

    One of the key issues of face recognition is to extract the features of face images. In this paper, we propose a novel method, named two-dimensional discriminant neighborhood preserving embedding (2DDNPE), for image feature extraction and face recognition. 2DDNPE benefits from four techniques, i.e., neighborhood preserving embedding (NPE), locality preserving projection (LPP), image based projection and the Fisher criterion. Firstly, NPE and LPP are two popular manifold learning techniques which can optimally preserve the local geometry structures of the original samples from different angles. Secondly, image based projection enables us to directly extract the optimal projection vectors from two-dimensional image matrices rather than vectors, which avoids the small sample size problem as well as preserving useful structural information embedded in the original images. Finally, the Fisher criterion applied in 2DDNPE can boost face recognition rates by minimizing the within-class distance while maximizing the between-class distance. To evaluate the performance of 2DDNPE, several experiments are conducted on the ORL and Yale face datasets. The results corroborate that 2DDNPE outperforms existing 1D feature extraction methods, such as NPE, LPP, LDA and PCA, across all experiments with respect to recognition rate and training time. 2DDNPE also delivers consistently promising results compared with other competing 2D methods such as 2DNPP, 2DLPP, 2DLDA and 2DPCA.

  9. Coupled kernel embedding for low resolution face image recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Yan, Hong

    2012-08-01

    Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, so it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Traditional face recognition methods based on super-resolution (SR) usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called Coupled Kernel Embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi-)positive definite properties are preserved for optimization. CKE addresses the problem of comparing multimodal data, which is difficult for conventional methods in practice due to the lack of an efficient similarity measure. In particular, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a unified optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the low- and high-resolution spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.

  10. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis, a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.
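
    As a toy illustration of the "barcode" idea, the sketch below pools horizontally oriented contrast energy into coarse bands down the vertical axis of a face image, producing the kind of one-dimensional profile the hypothesis emphasises. The row-difference operator and band count are crude stand-ins for the band-pass filtering used in that literature.

        import numpy as np

        def vertical_barcode_profile(face, n_bands=16):
            """Summarize a face as the energy of horizontal-orientation contrast
            pooled into bands along the vertical image axis."""
            f = face.astype(np.float64)
            horiz_contrast = np.abs(np.diff(f, axis=0))  # row differences pick up horizontal edges
            row_energy = horiz_contrast.sum(axis=1)      # one value per row
            # Pool rows into coarse bands down the face.
            bands = np.array_split(row_energy, n_bands)
            return np.array([b.mean() for b in bands])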

  11. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  12. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    PubMed Central

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm. PMID:24163638

  13. The effect of feature displacement on face recognition.

    PubMed

    Haig, Nigel D

    2013-01-01

    Human beings possess a remarkable ability to recognise familiar faces quickly and without apparent effort. In spite of this facility, the mechanisms of visual recognition remain tantalisingly obscure. An experiment is reported in which image processing equipment was used to displace slightly the features of a set of original facial images to form groups of modified images. Observers were then required to indicate whether they were being shown the "original" or a "modified" face, when shown one face at a time on a TV monitor screen. Memory reinforcement was provided by displaying the original face at another screen position, between presentations. The data show, inter alia, the very high significance of the vertical positioning of the mouth, followed by eyes, and then the nose, as well as high sensitivity to close-set eyes, coupled with marked insensitivity to wide-set eyes. Implications of the results for the use of recognition aids such as Identikit and Photofit are briefly discussed.

  14. Face recognition with histograms of fractional differential gradients

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Ma, Yan; Cao, Qi

    2014-05-01

    It has been shown that fractional differentiation can enhance edge information and nonlinearly preserve textural detail in an image. This paper investigates its ability for face recognition and presents a local descriptor called histograms of fractional differential gradients (HFDG) to extract facial visual features. HFDG encodes a face image into gradient patterns using multiorientation fractional differential masks, from which histograms of gradient directions are computed as the face representation. Experimental results on the Yale, face recognition technology (FERET), Carnegie Mellon University pose, illumination, and expression (CMU PIE), and A. Martinez and R. Benavente (AR) databases validate the feasibility of the proposed method and show that HFDG outperforms local binary patterns (LBP), histograms of oriented gradients (HOG), enhanced local directional patterns (ELDP), and Gabor feature-based methods.
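
    A hedged sketch of the descriptor's ingredients: a truncated Grünwald-Letnikov kernel provides the fractional differential filtering, and an orientation histogram weighted by gradient magnitude provides the representation. For brevity it builds one global histogram from separable x/y filtering, whereas the paper uses multiorientation masks and, presumably, spatially blocked histograms.

        import numpy as np
        from scipy.ndimage import convolve1d

        def frac_diff_kernel(v=0.5, size=5):
            """Truncated Grunwald-Letnikov coefficients for fractional order v."""
            c = np.zeros(size)
            c[0] = 1.0
            for k in range(1, size):
                c[k] = c[k - 1] * (k - 1 - v) / k
            return c

        def hfdg_like_descriptor(face, v=0.5, n_bins=9):
            """Histogram of gradient orientations where the gradients come from
            fractional differential filtering along x and y."""
            f = face.astype(np.float64)
            k = frac_diff_kernel(v)
            gx = convolve1d(f, k, axis=1, mode="nearest")
            gy = convolve1d(f, k, axis=0, mode="nearest")
            mag = np.hypot(gx, gy)
            ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
            hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag, density=True)
            return hist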

  15. Design of embedded intelligent monitoring system based on face recognition

    NASA Astrophysics Data System (ADS)

    Liang, Weidong; Ding, Yan; Zhao, Liangjin; Li, Jia; Hu, Xuemei

    2017-01-01

    In this paper, a new embedded intelligent monitoring system based on face recognition is proposed. The system uses a Raspberry Pi as the central processor. A sensor group based on Zigbee modules has been designed to help the system work better, and two alarm modes have been proposed, using the Internet and a 3G modem. The experimental results show that the system can work under various light intensities to recognize human faces and send alarm information in real time.
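
    The abstract does not give implementation details, so the following is a hypothetical sketch of the kind of client-side loop such a system might run on a Raspberry Pi, using OpenCV's stock Haar cascade for face detection; send_alarm is a stub standing in for the Internet/3G alarm paths, and the Zigbee sensor group is omitted.

```python
import cv2

def send_alarm(frame, channel="internet"):
    """Placeholder for the Internet / 3G alarm paths (implementation-specific)."""
    cv2.imwrite("intruder.jpg", frame)
    print(f"ALARM via {channel}: face detected, snapshot saved")

def monitor(camera_index=0):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # histogram equalization helps under varying light intensities
        gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            send_alarm(frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    monitor()
```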

  16. Emotion-attention interactions in recognition memory for distractor faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention.

  17. Neural and genetic foundations of face recognition and prosopagnosia.

    PubMed

    Grüter, Thomas; Grüter, Martina; Carbon, Claus-Christian

    2008-03-01

    Faces are of essential importance for human social life. They provide valuable information about the identity, expression, gaze, health, and age of a person. Recent face-processing models assume highly interconnected neural structures between different temporal, occipital, and frontal brain areas with several feedback loops. A selective deficit in the visual learning and recognition of faces is known as prosopagnosia, which can be found in both acquired and congenital forms. Recently, a hereditary sub-type of congenital prosopagnosia with a very high prevalence rate of 2.5% has been identified. Recent research results show that hereditary prosopagnosia is a clearly circumscribed face-processing deficit with a characteristic set of clinical symptoms. Comparing the face processing of people with prosopagnosia with that of controls can help to develop a more conclusive and integrated model of face processing. Here, we provide a summary of the current state of face processing research. We also describe the different types of prosopagnosia and present the set of typical symptoms found in the hereditary type. Finally, we will discuss the implications for future face recognition research.

  18. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication, and liveness detection of the face of an actual user are a major challenge because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of the energies of selective high-frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
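
    A hedged sketch of the spectral-energy part of the idea: take the 3-D FFT of a short face-video cube and measure how much energy falls outside a low-frequency core. The band boundary and decision threshold are illustrative, and the fringe-adjusted joint transform correlation stage is not shown.

```python
import numpy as np

def high_freq_energy_ratio(video_cube, cutoff=0.25):
    """video_cube: (frames, height, width) grayscale face region.
    Returns the fraction of 3-D spectral energy outside a centred low-frequency core."""
    spec = np.fft.fftshift(np.fft.fftn(video_cube.astype(float)))
    power = np.abs(spec) ** 2
    # build a centred low-frequency mask covering `cutoff` of each axis
    mask = np.ones_like(power, dtype=bool)
    for ax, size in enumerate(power.shape):
        idx = np.abs(np.arange(size) - size // 2) <= cutoff * size / 2
        shape = [1] * power.ndim
        shape[ax] = size
        mask &= idx.reshape(shape)
    total = power.sum()
    return (total - power[mask].sum()) / total

def looks_live(video_cube, threshold=0.35):
    """Illustrative decision: live faces are assumed to carry more high-band energy
    (micro-motion, texture) than printed or replayed ones; the threshold is a placeholder."""
    return high_freq_energy_ratio(video_cube) > threshold
```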

  19. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces.

  20. Evolutionary-Rough Feature Selection for Face Recognition

    NASA Astrophysics Data System (ADS)

    Mazumdar, Debasis; Mitra, Soma; Mitra, Sushmita

    Elastic Bunch Graph Matching (EBGM) is a feature-based face recognition algorithm which has been used to determine facial attributes from an image. However, the dimension of the feature vectors in the case of EBGM is quite high. Feature selection is a useful preprocessing step for reducing dimensionality, removing irrelevant data, improving learning accuracy, and enhancing output comprehensibility.
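
    The evolutionary-rough selection procedure itself is not described in this snippet; as a stand-in illustration of the preprocessing role of feature selection on high-dimensional face feature vectors, here is a small sketch using a mutual-information filter from scikit-learn (the data, the number of retained features, and the classifier are placeholders).

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: (n_samples, n_features) EBGM-style feature vectors, y: identity labels.
# Random placeholders stand in for real data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 20, size=200)

# keep the 128 features carrying the most mutual information about identity,
# then classify in the reduced space
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=128),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X, y)
```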

  1. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  2. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    ERIC Educational Resources Information Center

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  3. Impact of Intention on the ERP Correlates of Face Recognition

    ERIC Educational Resources Information Center

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  4. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and make use of information, and there is a large gap between the volume of digitized multimedia data and the storage resources and network bandwidth that current hardware can provide. Image storage and transmission are a prominent example of this problem. Image compression is useful when images need to be transmitted across networks in a less costly way, because it reduces the data volume and thus the transmission time. This paper discusses the effect of image compression on a face recognition system. For compression purposes, we adopted the JPEG, JPEG 2000, and JPEG XR coding standards; the face recognition algorithm studied is SIFT. Experimental results show that the system still maintains a high recognition rate under high compression ratios, and that JPEG XR is superior to the other two standards in terms of performance and complexity.
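
    A small sketch of the kind of experiment described: re-encode a face image in memory at several JPEG quality levels and count SIFT matches against the uncompressed original. JPEG 2000/JPEG XR and the full recognition protocol are omitted, the image path is a placeholder, the ratio-test threshold is conventional rather than taken from the paper, and OpenCV 4.4 or later is assumed for SIFT_create.

```python
import cv2

def sift_match_count(img_a, img_b, ratio=0.75):
    """Count SIFT correspondences that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, da = sift.detectAndCompute(img_a, None)
    _, db = sift.detectAndCompute(img_b, None)
    if da is None or db is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(da, db, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

original = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # path is a placeholder
for quality in (90, 50, 20, 10, 5):
    ok, buf = cv2.imencode(".jpg", original, [cv2.IMWRITE_JPEG_QUALITY, quality])
    compressed = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    print(quality, sift_match_count(original, compressed))
```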

  5. Face Recognition System for Set-Top Box-Based Intelligent TV

    PubMed Central

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low resource set-top box and low cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low processing background subtraction and face color filtering; second, the detected candidate regions of face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user
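
    A hedged sketch of the first, low-cost client-side step only (background subtraction plus skin-colour filtering to obtain candidate face regions to send to the server); the server-side detection, in-plane rotation compensation, and pose templates are not shown, and the Cr/Cb skin thresholds are common textbook values rather than the paper's.

```python
import cv2
import numpy as np

# adaptive background model shared across frames
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def candidate_face_regions(frame_bgr, min_area=1500):
    """Return bounding boxes of moving, skin-coloured regions (candidates for the server)."""
    motion = subtractor.apply(frame_bgr)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # textbook Cr/Cb skin range
    mask = cv2.bitwise_and(motion, skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```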

  6. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone, and at the same time protection from theft is very important. Prevention of vehicle theft can be managed remotely by an authorized person, and the location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison is made against preloaded faces for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt, or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location; the authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed in VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, and banking applications.
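
    The paper develops PCA in MATLAB; as an illustration only, here is a minimal NumPy equivalent of the eigenface step it relies on (projection onto the top principal components plus nearest-neighbour matching). The number of components is arbitrary.

```python
import numpy as np

def train_eigenfaces(faces, n_components=40):
    """faces: (n_samples, n_pixels) flattened, aligned grayscale face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # economical SVD: rows of Vt are the eigenfaces (principal axes in pixel space)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    return mean, eigenfaces, centered @ eigenfaces.T   # gallery projections

def identify(probe, mean, eigenfaces, gallery_proj, labels):
    """Return the best-matching gallery label and its distance (for thresholding)."""
    coeffs = (probe - mean) @ eigenfaces.T
    distances = np.linalg.norm(gallery_proj - coeffs, axis=1)
    return labels[np.argmin(distances)], distances.min()
```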

  7. Multi-stream face recognition for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification, and it has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expressions than the previous single-stream approach.
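
    A compact sketch of the multi-stream idea under stated assumptions: decompose each face with a 2-D wavelet transform (PyWavelets), treat every subband as a separate stream, and fuse per-stream nearest-neighbour distances with a weighted sum. The wavelet, decomposition level, and stream weights are illustrative, not those used in the paper.

```python
import numpy as np
import pywt

def subband_features(img, wavelet="haar", level=2):
    """Return one normalized feature vector per subband (LL plus all detail bands)."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    streams = [coeffs[0].ravel()]                        # approximation (LL) stream
    for detail in coeffs[1:]:
        streams.extend(band.ravel() for band in detail)  # LH, HL, HH at each level
    return [s / (np.linalg.norm(s) + 1e-8) for s in streams]

def multistream_score(probe_img, gallery_img, weights=None):
    """Fuse per-stream distances; a smaller score means a better match."""
    p, g = subband_features(probe_img), subband_features(gallery_img)
    weights = weights or [1.0] * len(p)                  # equal weights as a default
    return sum(w * np.linalg.norm(a - b) for w, a, b in zip(weights, p, g))
```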

  8. Face-blind for other-race faces: Individual differences in other-race recognition impairments.

    PubMed

    Wan, Lulu; Crookes, Kate; Dawel, Amy; Pidcock, Madeleine; Hall, Ashleigh; McKone, Elinor

    2017-01-01

    We report the existence of a previously undescribed group of people, namely individuals who are so poor at recognition of other-race faces that they meet criteria for clinical-level impairment (i.e., they are "face-blind" for other-race faces). Testing 550 participants, and using the well-validated Cambridge Face Memory Test for diagnosing face blindness, we find the rate of other-race face blindness to be nontrivial, specifically 8.1% of Caucasians and Asians raised in majority own-race countries. Results also show that risk factors for other-race face blindness include a lack of interracial contact and being at the lower end of the normal range of general face recognition ability (i.e., even for own-race faces), but not applying less individuating effort to other-race than own-race faces. Findings provide a potential resolution of contradictory evidence concerning the importance of the other-race effect (ORE), by explaining how it is possible for the mean ORE to be modest in size (suggesting a genuine but minor problem) and simultaneously for individuals to suffer major functional consequences in the real world (e.g., eyewitness misidentification of other-race offenders leading to wrongful imprisonment). Findings imply that, in legal settings, evaluating an eyewitness's chance of having made an other-race misidentification requires information about the underlying face recognition abilities of the individual witness. Additionally, analogy with prosopagnosia (inability to recognize even own-race faces) suggests that everyday social interactions with other-race people, such as those between colleagues in the workplace, will be seriously impacted by the ORE in some people.

  9. Uncorrelated and discriminative graph embedding for face recognition

    NASA Astrophysics Data System (ADS)

    Peng, Chengyu; Li, Jianwei; Huang, Hong

    2011-07-01

    We present a novel feature extraction algorithm for face recognition called the uncorrelated and discriminative graph embedding (UDGE) algorithm, which incorporates graph embedding and a local scaling method and obtains uncorrelated discriminative vectors in the projected subspace. An optimization objective function is herein defined to make the discriminative projections preserve the intrinsic neighborhood geometry of the within-class samples while enlarging the margins of between-class samples near the class boundaries. In comparison with the linear extension of graph embedding in a face recognition scenario, UDGE efficiently dispenses with a data-dependent, prespecified parameter that balances the objectives of within-class locality and between-class locality. Moreover, it can address the small-sample-size problem, and its classification accuracy is not sensitive to the neighborhood size or the weight value. Extensive experiments on the extended YaleB, CMU PIE, and Indian face databases demonstrate the effectiveness of UDGE.

  10. Face recognition with the Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Suarez, Pedro F.

    1991-12-01

    The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community with a more precise method of fitting custom helmets.

  11. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  12. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention

    PubMed Central

    Silva, André; Macedo, António F.; Albuquerque, Pedro B.; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task – in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration using an eye-tracker were taken – followed by a recognition task. Eye-gaze was higher for the attractive stimuli compared to unattractive stimuli. Also, attractive photographs produced more hits and false recognitions than unattractive photographs which may indicate that regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this, as attending to more attractive faces but not always remembering them accurately and differentially compared with unseen attractive faces, may help females secure mates with higher reproductive value. PMID:26858683

  13. Spatial location in brief, free-viewing face encoding modulates contextual face recognition

    PubMed Central

    Felisberti, Fatima M.; McDermott, Mark R.

    2013-01-01

    The effect of the spatial location of faces in the visual field during brief, free-viewing encoding in subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than cheaters, and it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d′ and faster reaction time (RT). The d′ for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space. PMID:24349694

  14. Neural Mechanism for Mirrored Self-face Recognition.

    PubMed

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants.

  15. A new theoretical approach to improving face recognition in disorders of central vision: face caricaturing.

    PubMed

    Irons, Jessica; McKone, Elinor; Dumbleton, Rachael; Barnes, Nick; He, Xuming; Provis, Jan; Ivanovici, Callin; Kwa, Alisa

    2014-02-17

    Damage to central vision, of which age-related macular degeneration (AMD) is the most common cause, leaves patients with only blurred peripheral vision. Previous approaches to improving face recognition in AMD have employed image manipulations designed to enhance early-stage visual processing (e.g., magnification, increased HSF contrast). Here, we argue that further improvement may be possible by targeting known properties of mid- and/or high-level face processing. We enhance identity-related shape information in the face by caricaturing each individual away from an average face. We simulate early- through late-stage AMD-blur by filtering spatial frequencies to mimic the amount of blurring perceived at approximately 10° through 30° into the periphery (assuming a face seen premagnified on a tablet computer). We report caricature advantages for all blur levels, for face viewpoints from front view to semiprofile, and in tasks involving perceiving differences in facial identity between pairs of people, remembering previously learned faces, and rejecting new faces as unknown. Results provide a proof of concept that caricaturing may assist in improving face recognition in AMD and other disorders of central vision.
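
    The paper's caricaturing operates on facial shape relative to an average face; as a toy, pixel-space illustration of "exaggerating the difference from an average" and of simulating peripheral blur, consider the sketch below (real caricaturing would warp landmark geometry, and the blur here is a plain Gaussian rather than the paper's spatial-frequency filtering).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def caricature(face, average_face, strength=0.5):
    """Push a face away from the average by a given strength (0 = original).

    face, average_face: aligned grayscale images as float arrays in [0, 255].
    """
    exaggerated = average_face + (1.0 + strength) * (face - average_face)
    return np.clip(exaggerated, 0, 255)

def simulate_peripheral_blur(face, sigma=4.0):
    """Crude stand-in for spatial-frequency filtering that mimics AMD-like blur."""
    return gaussian_filter(face, sigma=sigma)
```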

  16. Face familiarity promotes stable identity recognition: exploring face perception using serial dependence

    PubMed Central

    Kok, Rebecca; Van der Burg, Erik; Rhodes, Gillian; Alais, David

    2017-01-01

    Studies suggest that familiar faces are processed in a manner distinct from unfamiliar faces and that familiarity with a face confers an advantage in identity recognition. Our visual system seems to capitalize on experience to build stable face representations that are impervious to variation in retinal input that may occur due to changes in lighting, viewpoint, viewing distance, eye movements, etc. Emerging evidence also suggests that our visual system maintains a continuous perception of a face's identity from one moment to the next despite the retinal input variations through serial dependence. This study investigates whether interactions occur between face familiarity and serial dependence. In two experiments, participants used a continuous scale to rate attractiveness of unfamiliar and familiar faces (either experimentally learned or famous) presented in rapid sequences. Both experiments revealed robust inter-trial effects in which attractiveness ratings for a given face depended on the preceding face's attractiveness. This inter-trial attractiveness effect was most pronounced for unfamiliar faces. Indeed, when participants were familiar with a given face, attractiveness ratings showed significantly less serial dependence. These results represent the first evidence that familiar faces can resist the temporal integration seen in sequential dependencies and highlight the importance of familiarity to visual cognition.

  17. Enhanced retinal modeling for face recognition and facial feature point detection under complex illumination conditions

    NASA Astrophysics Data System (ADS)

    Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong

    2016-07-01

    We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling included three aspects. First, a combined filtering scheme was applied to simulate the functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B; the Carnegie Mellon University pose, illumination, and expression; and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate the illumination differences among images captured under complex illumination conditions, which helps improve the accuracy of face recognition and that of facial feature point detection.
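
    The combined filter, the optimal threshold rule, and the arctangent-based adaptive factor cannot be reproduced from the abstract alone; the sketch below only conveys the general shape of retina-inspired illumination normalization (estimate local illumination with smoothing filters, then compress it with an arctangent-style adaptive function). All constants are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def retina_normalize(img, sigma=8.0, eps=1e-6):
    """Retina-inspired illumination normalization (illustrative, not the paper's exact model)."""
    img = img.astype(float) / 255.0
    # combined smoothing as a crude local-illumination estimate
    illumination = 0.5 * gaussian_filter(img, sigma) + 0.5 * median_filter(img, size=9)
    # adaptive factor grows where the estimated illumination is low
    factor = 1.0 + np.arctan(1.0 / (illumination + eps))
    reflectance = img / (illumination + eps)
    out = np.arctan(factor * reflectance)
    return (out - out.min()) / (out.max() - out.min() + eps)   # rescale to [0, 1]
```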

  18. Deep learning and face recognition: the state of the art

    NASA Astrophysics Data System (ADS)

    Balaban, Stephen

    2015-05-01

    Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm

  19. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, as a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. The real-world data used here were gathered from "Facebook" as part of the "FaceBots" project, which established the world's first physical face-recognizing and conversing robot able to utilize and publish information on "Facebook." We present methods as well as results for utilizing this correlation in both directions: algorithms for utilizing knowledge of the social context for faster and better face recognition are given, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  20. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limited availability of gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both an affine/convex hull-based approach and regularization to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms state-of-the-art methods impressively.

  1. An integrated modeling approach to age invariant face recognition

    NASA Astrophysics Data System (ADS)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent rank-1 identification rate.

  2. Determination of candidate subjects for better recognition of faces

    NASA Astrophysics Data System (ADS)

    Wang, Xuansheng; Chen, Zhen; Teng, Zhongming

    2016-05-01

    In order to improve the accuracy of face recognition and to address the problem of varying poses, we present an improved collaborative representation classification (CRC) algorithm using the original training samples and their corresponding mirror images. First, the mirror images are generated from the original training samples. Second, both the original training samples and their mirror images are simultaneously used to represent the test sample via improved collaborative representation. Then, some classes which are "close" to the test sample are coarsely selected as candidate classes. Finally, the candidate classes are used to represent the test sample again, and the class most similar to the test sample is determined. The experimental results show that our proposed algorithm is more robust than the original CRC algorithm and can effectively improve the accuracy of face recognition.
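
    A condensed sketch of the core computation under stated assumptions: augment the dictionary with mirrored training images, solve the ridge-regularized collaborative representation in closed form, and score classes by their regularized reconstruction residuals. The paper's two-stage candidate-class refinement is collapsed into a single pass here, and the regularization weight is illustrative.

```python
import numpy as np

def crc_with_mirrors(train_imgs, train_labels, test_img, lam=1e-2):
    """train_imgs: list of (h, w) arrays; returns the predicted label for test_img."""
    samples, labels = [], []
    for img, lab in zip(train_imgs, train_labels):
        samples += [img.ravel(), np.fliplr(img).ravel()]   # original + mirror image
        labels += [lab, lab]
    X = np.stack(samples, axis=1).astype(float)            # dictionary, one column per sample
    X /= np.linalg.norm(X, axis=0, keepdims=True)
    y = test_img.ravel().astype(float)
    y /= np.linalg.norm(y)
    labels = np.array(labels)

    # collaborative representation: w = (X^T X + lam*I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    # assign the class with the smallest regularized reconstruction residual
    best, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = np.linalg.norm(y - X[:, idx] @ w[idx]) / (np.linalg.norm(w[idx]) + 1e-8)
        if residual < best_res:
            best, best_res = c, residual
    return best
```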

  3. On the usefulness of hyperspectral imaging for face recognition

    NASA Astrophysics Data System (ADS)

    Bianco, Simone

    2016-11-01

    Hyperspectral cameras provide additional information in terms of multiple sampling of the visible spectrum, holding information that could be potentially useful for biometric applications. This paper investigates whether the performance of hyperspectral face recognition algorithms can be improved by considering single and multiple one-dimensional (1-D) projections of the whole spectral data along the spectral dimension. Three different projections are investigated and found by optimization: single-spectral band selection, nonnegative spectral band combination, and unbounded spectral band combination. Since 1-D projections can be performed directly on the imaging device with color filters, projections are also restricted to be physically plausible. The experiments are performed on a standard hyperspectral dataset and the obtained results outperform eight existing hyperspectral face recognition algorithms.

  4. Thermal-Polarimetric and Visible Data Collection for Face Recognition

    DTIC Science & Technology

    2016-09-01

    an online course from the Collaborative Institutional Training Initiative ( CITI ) and pass the online test. In addition, the investigators needed to...recognition researchers and the IRB. Our face data collection also not fit into the standard practices covered in the CITI training. The CITI training...coupled device CITI Collaborative Institutional Training Initiative DOLP degree of linear polarization FOV field of view FPA focal-plane array

  5. Identity Verification Through the Fusion of Face and Speaker Recognition

    DTIC Science & Technology

    1993-12-01

    the original signal into a linear combination of basis functions, which are obtained from simple dilations and translations of a " mother " wavelet (for...learning experience, as well as a pleasurable one. I also wish to thank " Mother " Dan Zambon for his unceasing efforts in keeping the computer systems...is examined for suitability in the verification task. The base face recognition system used the KLT for feature reduction and a back- propagation

  6. Design and implementation of face recognition system based on Windows

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    In view of the lack of security and convenience of the basic Windows password login, we introduce a biometric technology, face recognition, into the computer login system. Not only can it protect the computer system, it can also identify administrators at different levels according to their access level. With the enhanced system security, users neither have to type a cumbersome password nor worry about the password being stolen.

  7. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    ERIC Educational Resources Information Center

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  8. Face liveness detection for face recognition based on cardiac features of skin color image

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include the printing attack, the replay attack, and the 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting a printing attack or a 3D mask attack.
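
    A hedged sketch of the cardiac-cue idea: average the green channel over the face region frame by frame and look at how much spectral power falls in the human heart-rate band. The band limits and the decision threshold are illustrative, not taken from the paper.

```python
import numpy as np

def pulse_band_score(face_frames, fps=30.0, band=(0.7, 3.0)):
    """face_frames: (n_frames, h, w, 3) BGR face crops. Returns the fraction of
    signal power concentrated in the heart-rate band (0.7-3 Hz, roughly 42-180 bpm)."""
    green = face_frames[:, :, :, 1].astype(float).mean(axis=(1, 2))
    green -= green.mean()
    green /= green.std() + 1e-8
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum[1:].sum() + 1e-8)   # ignore the DC bin

def is_live_face(face_frames, fps=30.0, threshold=0.4):
    """Illustrative decision rule: live skin shows a cardiac peak, prints/masks should not."""
    return pulse_band_score(face_frames, fps) > threshold
```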

  9. Performance of a Working Face Recognition Machine using Cortical Thought Theory

    DTIC Science & Technology

    1984-12-04

    [Contents fragments: Face Recognition; Physiological Clues to Face Recognition; Features of a "Human-Like" Face Recognition System] ... physiology as it is presently understood. The performance of the face recognition system strongly suggests CTT's general applicability to vision, and ... least 90% (an arbitrary value). 4) All critical components and assumptions of the system must be consistent with CTT, and the human physiology to

  10. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data-driven, such as ICA-based or NNMF-based features. Other features are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images, and 3D voxel grids. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using linear discriminant analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.
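
    As one concrete instance of the feature/classifier combinations listed, here is a minimal sketch of block-wise 2-D DCT features computed from registered depth images and matched with linear discriminant analysis; the block size and the number of retained coefficients are illustrative.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def block_dct_features(depth_img, block=16, keep=10):
    """Concatenate the low-order (top-left) 2-D DCT coefficients of each block."""
    h, w = depth_img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            d = dctn(depth_img[r:r + block, c:c + block].astype(float), norm="ortho")
            feats.append(d[:keep // 2 + 1, :keep // 2 + 1].ravel()[:keep])
    return np.concatenate(feats)

def train_and_identify(gallery_depth, gallery_labels, probe_depth):
    """gallery_depth / probe_depth are assumed to be registered 2-D depth images."""
    X = np.array([block_dct_features(d) for d in gallery_depth])
    clf = LinearDiscriminantAnalysis().fit(X, gallery_labels)
    return clf.predict([block_dct_features(probe_depth)])[0]
```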

  11. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  12. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  13. Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance

    PubMed Central

    McGugin, Rankin W.; Van Gulick, Ana E.; Gauthier, Isabel

    2016-01-01

    The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to non-face objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here we show an effect of expertise with non-face objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally-defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. While subjects with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects, but rather living and non-living objects. PMID:26439272

  14. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  15. A Motivational Determinant of Facial Emotion Recognition: Regulatory Focus Affects Recognition of Emotions in Faces

    PubMed Central

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G.; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition. PMID:25380247

  16. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  17. Recognition and eye movements with partially hidden pictures of faces and cars in different orientations.

    PubMed

    Wade, Nicholas J; Tatler, Benjamin W

    2010-01-01

    Inverted faces are more difficult to identify than upright ones. This even applies when pictures of faces are partially hidden in geometrical designs so that it takes some seconds to recognise them. Similar, though not as pronounced, orientation preferences apply to familiar objects. We compared the recognition times and patterns of eye movements for two sets of familiar symmetrical objects. Pictures of faces and of cars were embedded in patterns of concentric circles in order to render them difficult to recognise. They were presented in four orientations, at 90° intervals from upright. Two experiments were conducted with the same set of stimuli; experiment 1 required participants to respond in terms of faces or cars, and in experiment 2 responses were made to the orientation of the embedded image independently of its class. Upright faces were recognised more accurately and faster than those in other orientations; fixation durations were longer for upright faces even before recognition. These results applied to both experiments. Orientation effects for cars were not pronounced and distinctions between 90°, 180°, and 270° embedded images were not consistent; this was the case in both experiments.

  18. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as in iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields a better performance than state-of-the-art methods.

  19. Face recognition using spatially smoothed discriminant structure-preserved projections

    NASA Astrophysics Data System (ADS)

    Yi, Yugen; Zhou, Wei; Wang, Jianzhong; Shi, Yanjiao; Kong, Jun

    2014-03-01

    Recently, structure-preserved projections (SPP) were proposed as a local matching-based algorithm for face recognition. Compared with other methods, the main advantage of SPP is that it can preserve the configural structure of subpatterns in each face image. However, the SPP algorithm ignores the information among samples from different classes, which may weaken its recognition performances. Moreover, the relationships of nearby pixels in the subpattern are also neglected in SPP. In order to address these limitations, a new algorithm termed spatially smoothed discriminant structure-preserved projections (SS-DSPP) is proposed. SS-DSPP takes advantage of the class information to characterize the discrimination structure of subpatterns from different classes, and a new spatially smooth constraint is also derived to preserve the intrinsic two-dimensional structure of each subpattern. The feasibility and effectiveness of the proposed algorithm are evaluated on four standard face databases (Yale, extended YaleB, CMU PIE, and AR). Experimental results demonstrate that our SS-DSPP outperforms the original SPP and several state-of-the-art algorithms.

  20. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illumination/environments, significant performance degradation is expected when matching iris images acquired in two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation which is capable of learning domain knowledge. Our approach of estimating corresponding visible iris patterns from the synthesis of iris patches in near-infrared iris images achieves outperforming results for cross-spectral iris recognition. In this paper, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondences is proposed and evaluated. This paper presents experimental results from three publicly available databases (the PolyU cross-spectral iris image database, IIITD CLI, and the UND database) and achieves outperforming results for cross-sensor and cross-spectral iris matching.

  1. Facial emotion recognition deficits: The new face of schizophrenia

    PubMed Central

    Behere, Rishikesh V.

    2015-01-01

    Schizophrenia has classically been described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD are among the important deficits in schizophrenia and could be trait markers for the disorder. Because FERD are associated with socio-occupational dysfunction, they are of considerable clinical relevance. This review discusses FERD in schizophrenia, challenges in their assessment in our cultural context, their implications for understanding neurobiological mechanisms, and clinical applications. PMID:26600574

  2. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. The QLFW database represents an initial attempt to provide a set of labeled face images spanning a wide range of quality, from no perceived impairment to strong perceived impairment, for face detection and face recognition applications. The types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur, and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortion and also to assess human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortion. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
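
    As a rough illustration of the distortion families listed (JPEG compression, additive white noise, Gaussian blur, and contrast change), the sketch below applies each of them to an image with Pillow and NumPy; the level values and function layout are placeholders, not the QLFW protocol.

      import io
      import numpy as np
      from PIL import Image, ImageEnhance, ImageFilter

      def distort(img, kind, level):
          """Apply one distortion family to a PIL image; `level` is an illustrative knob."""
          if kind == "blur":                                  # Gaussian blur
              return img.filter(ImageFilter.GaussianBlur(radius=level))
          if kind == "noise":                                 # additive white noise
              arr = np.asarray(img, dtype=np.float32)
              arr += np.random.normal(0.0, level, arr.shape)
              return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
          if kind == "jpeg":                                  # JPEG compression artifacts
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=int(level))
              buf.seek(0)
              return Image.open(buf).convert(img.mode)
          if kind == "contrast":                              # contrast change
              return ImageEnhance.Contrast(img).enhance(level)
          raise ValueError(f"unknown distortion kind: {kind}")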

  3. Face recognition: Eigenface, elastic matching, and neural nets

    SciTech Connect

    Zhang, J.; Yan, Y.; Lades, M.

    1997-09-01

    This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual databases, each with a moderate subject size, and a combined database with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum distance classifier, works well when lighting variation is small; its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and therefore is more versatile. The performance of the autoassociation and classification nets is upper bounded by that of the eigenface algorithm, but these nets are more difficult to implement in practice.
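
    The eigenface algorithm referred to above is, as the abstract notes, essentially PCA followed by a minimum-distance classifier. The sketch below illustrates that idea using scikit-learn; the number of components and the nearest-neighbour matching rule are illustrative assumptions, not the paper's exact configuration.

      import numpy as np
      from sklearn.decomposition import PCA

      def train_eigenfaces(gallery, labels, n_components=50):
          """gallery: (n_images, n_pixels) array of flattened, aligned face images."""
          pca = PCA(n_components=n_components).fit(gallery)
          return pca, pca.transform(gallery), np.asarray(labels)

      def identify(probe, pca, gallery_coords, labels):
          """Minimum-distance (nearest-neighbour) match in the eigenface subspace."""
          coords = pca.transform(probe.reshape(1, -1))
          distances = np.linalg.norm(gallery_coords - coords, axis=1)
          return labels[np.argmin(distances)]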

  4. Effects of distance on face recognition: implications for eyewitness identification.

    PubMed

    Lampinen, James Michael; Erickson, William Blake; Moore, Kara N; Hittson, Aaron

    2014-12-01

    Eyewitnesses sometimes view faces from a distance, but little research has examined the accuracy of witnesses as a function of distance. The purpose of the present project is to examine the relationship between identification accuracy and distance under carefully controlled conditions. This is one of the first studies to examine the ability to recognize faces of strangers at a distance under free-field conditions. Participants viewed eight live human targets, displayed at one of six outdoor distances that varied between 5 and 40 yards. Participants were then shown 16 photographs: 8 of the previously viewed targets and 8 of nonviewed foils that matched a verbal description of the target counterpart. Participants rated their confidence of having seen or not having seen each individual on an 8-point scale. Long distances were associated with poor recognition memory and shifts in response bias.

  5. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2016-09-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms used for image representation and face recognition, such as 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA), and kernel PCA (KPCA). As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments is performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than PCA, FDA, ICA, and KPCA. The experimental results also indicate that the extraction of image features is computationally more efficient using 2D PCA than PCA.
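
    A compact sketch of the 2D PCA construction described above: the image covariance matrix is accumulated directly from image matrices and its leading eigenvectors form the projection. Variable names and the number of components are assumptions for illustration, not the paper's settings.

      import numpy as np

      def two_d_pca(images, n_components=10):
          """images: (M, h, w) stack of face image matrices.
          Returns the (w, n_components) projection matrix of 2D PCA."""
          mean = images.mean(axis=0)
          G = np.zeros((images.shape[2], images.shape[2]))
          for A in images:                        # image covariance matrix built
              D = A - mean                        # directly from image matrices
              G += D.T @ D
          G /= len(images)
          _, vecs = np.linalg.eigh(G)             # eigenvalues in ascending order
          return vecs[:, ::-1][:, :n_components]  # keep the leading eigenvectors

      def project(image, W):
          """Feature matrix Y = A W used for matching (shape h x n_components)."""
          return image @ W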

  6. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  7. Accurate invariant pattern recognition for perspective camera model

    NASA Astrophysics Data System (ADS)

    Serikova, Mariya G.; Pantyushina, Ekaterina N.; Zyuzin, Vadim V.; Korotaev, Valery V.; Rodrigues, Joel J. P. C.

    2015-05-01

    In this work we present a pattern recognition method based on geometric analysis of a flat pattern. The method provides reliable detection of the pattern even when significant perspective deformation is present in the image. It is based on the fact that the collinearity of lines remains unchanged under perspective transformation, so the recognition feature is the presence of two lines containing four points each. The eight points form two squares for the convenience of applying corner detection algorithms. The method is suitable for automatic pattern detection in a dense environment of false objects. We evaluate the proposed method's detection statistics and runtime performance. To estimate pattern detection quality, we performed an image simulation process with random size and spatial frequency of background clutter, while both translational (ranging from 200 mm to 1500 mm) and rotational (up to 60°) deformations of the given pattern position were added. The simulated measuring system included a camera (a 4000x4000-pixel sensor with a 25 mm lens) and a flat pattern. Tests showed that the proposed method yields no more than 1% recognition error when the number of false targets is up to 40.
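
    Since collinearity is preserved under perspective transformation, the core geometric test implied by the abstract can be sketched as a check that each candidate four-point group lies close to a straight line. The snippet below assumes corner points have already been detected; the tolerance value and function names are illustrative.

      import numpy as np

      def collinearity_error(points):
          """RMS distance of points to their best-fit line (smaller = more collinear)."""
          pts = np.asarray(points, dtype=float)
          centred = pts - pts.mean(axis=0)
          _, _, vt = np.linalg.svd(centred, full_matrices=False)
          direction = vt[0]                      # principal axis of the point set
          residual = centred - np.outer(centred @ direction, direction)
          return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

      def looks_like_pattern(group_a, group_b, tol=1.5):
          """Accept a candidate when both four-point groups are collinear within `tol` px."""
          return collinearity_error(group_a) < tol and collinearity_error(group_b) < tol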

  8. Pose invariant face recognition: 3D model from single photo

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. The technique is based on optimized 3D modeling that allows faces to be reconstructed in different poses from a limited number of references (i.e., one image per class/person). In particular, we use an active shape model to detect a set of keypoints on the face needed to deform our synthetic model with our optimized finite element method; to improve this deformation, we propose a regularization by distances on a graph. To perform the identification we use the VanderLugt correlator, well known to address this task effectively. In addition, we add a difference-of-Gaussians filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person, with azimuth and elevation ranging from -30° to +30°. The obtained results demonstrate the robustness of the new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.

  9. Simultaneous Versus Sequential Presentation in Testing Recognition Memory for Faces.

    PubMed

    Finley, Jason R; Roediger, Henry L; Hughes, Andrea D; Wahlheim, Christopher N; Jacoby, Larry L

    2015-01-01

    Three experiments examined the issue of whether faces could be better recognized in a simultaneous test format (2-alternative forced choice [2AFC]) or a sequential test format (yes-no). All experiments showed that when target faces were present in the test, the simultaneous procedure led to superior performance (area under the ROC curve), whether lures were high or low in similarity to the targets. However, when a target-absent condition was used in which no lures resembled the targets but the lures were similar to each other, the simultaneous procedure yielded higher false alarm rates (Experiments 2 and 3) and worse overall performance (Experiment 3). This pattern persisted even when we excluded responses that participants opted to withhold rather than volunteer. We conclude that for the basic recognition procedures used in these experiments, simultaneous presentation of alternatives (2AFC) generally leads to better discriminability than does sequential presentation (yes-no) when a target is among the alternatives. However, our results also show that the opposite can occur when there is no target among the alternatives. An important future step is to see whether these patterns extend to more realistic eyewitness lineup procedures. The pictures used in the experiment are available online at http://www.press.uillinois.edu/journals/ajp/media/testing_recognition/.

  10. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system referred to as a 3D multi-spectrum sensor system, comprising three types of sensors (visible, thermal-IR, and time-of-flight (ToF)), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system operating under light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for a face recognition system, is obtained. PMID:24072025

  11. Face recognition using 4-PSK joint transform correlation

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2016-04-01

    This paper presents an efficient phase-encoded, 4-phase-shift-keying (PSK)-based fringe-adjusted joint transform correlation (FJTC) technique for face recognition applications. The proposed technique uses phase encoding and a 4-channel phase-shifting method on the reference image, which can be pre-calculated without affecting the system processing speed. The 4-channel PSK step eliminates the unwanted zero-order term and the autocorrelation among multiple similar input-scene objects while yielding enhanced cross-correlation output. For each channel, discrete wavelet decomposition preprocessing is used to accommodate the impact of various 3D facial expressions, the effects of noise, and illumination variations. The performance of the proposed technique has been tested on several image datasets, such as Yale and Extended Yale B, under different conditions such as illumination variation and 3D changes in facial expression. The test results show that the proposed technique yields significantly better performance than existing JTC-based face recognition techniques.
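
    For orientation only, the sketch below implements a classical joint transform correlation (reference and scene placed side by side, joint power spectrum, second Fourier transform), not the fringe-adjusted, phase-encoded 4-PSK variant proposed in the paper; the array sizes and the gap between the two images are assumptions.

      import numpy as np

      def joint_transform_correlation(reference, scene, gap=8):
          """Classical JTC: reference and scene side by side in the input plane;
          the joint power spectrum is Fourier transformed again to give the
          correlation plane (cross-correlation peaks flank the zero-order term)."""
          h, w = reference.shape
          joint = np.zeros((h, 2 * w + gap))
          joint[:, :w] = reference
          joint[:, -w:] = scene
          jps = np.abs(np.fft.fft2(joint)) ** 2          # joint power spectrum
          return np.fft.fftshift(np.abs(np.fft.fft2(jps)))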

  12. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition. Due to the lack of low-cost, robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using these 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and the angle of image acquisition in the field. These tests have shown that matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  13. Boosting color feature selection for color face recognition.

    PubMed

    Choi, Jae Young; Ro, Yong Man; Plataniotis, Konstantinos N

    2011-05-01

    This paper introduces a new color face recognition (FR) method that makes effective use of boosting learning as a color-component feature selection framework. The proposed framework is designed to find the best set of color-component features from various color spaces (or models), aiming to achieve the best FR performance for a given FR task. In addition, to facilitate the complementary effect of the selected color-component features for color FR, they are combined using the proposed weighted feature fusion scheme. The effectiveness of our color FR method has been successfully evaluated on five public face databases (DBs): CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0. Experimental results show that the proposed method performs considerably better than other state-of-the-art color FR methods over different FR challenges, including highly uncontrolled illumination, moderate pose variation, and small-resolution face images.

  14. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

    One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face recognition system with the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate (FAR) of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.

  15. Online and Unsupervised Face Recognition for Humanoid Robot: Toward Relationship with People

    DTIC Science & Technology

    2001-01-01

  16. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  17. The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Coltheart, Max

    2012-01-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…

  18. The effect of inversion on face recognition in adults with autism spectrum disorder.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-05-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than the mouth region and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.

  19. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance for meeting today's security needs. Most systems developed so far are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation of the subject. However, 3D face recognition still needs to tackle the deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results demonstrate the feasibility of this framework.

  20. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes, while sunglasses cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes', 'no', or 'abstain' and are tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. A PD-versus-FAR (ROC) curve is obtained in this way.

  1. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  2. Adaptive feature-specific imaging: a face recognition example.

    PubMed

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static-FSI (SFSI) and static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement compared to SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and a desired Pe = 10^-2, AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.

  3. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    PubMed

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces.

  4. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second targets (T1, T2). They demonstrate reduced neutral T2 identity recognition after an angry or happy T1 expression, compared to a neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after a neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  5. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty for current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition: a number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve detection accuracy. The proposed system can be applied in interaction equipment, such as household interactive television, without special training for users.
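
    The classification step described above scores an observation sequence (e.g., quantized 8-direction codes) against one HMM per basic gesture unit and picks the most likely model. A minimal sketch of that scoring step for a discrete-observation HMM is given below; the model parameters and dictionary layout are assumptions, not the paper's trained models.

      import numpy as np

      def forward_log_likelihood(obs, start, trans, emit):
          """log P(obs | model) for a discrete HMM via the scaled forward algorithm.
          obs: sequence of symbol indices; start: (S,), trans: (S, S), emit: (S, K)."""
          alpha = start * emit[:, obs[0]]
          log_like = np.log(alpha.sum())
          alpha /= alpha.sum()
          for symbol in obs[1:]:
              alpha = (alpha @ trans) * emit[:, symbol]
              scale = alpha.sum()                 # rescale to avoid underflow
              log_like += np.log(scale)
              alpha /= scale
          return log_like

      def classify(obs, models):
          """models: {gesture_name: (start, trans, emit)}; pick the most likely unit."""
          return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))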

  6. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    PubMed

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Sherf, Suzanne K; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  7. Face recognition based on matching of local features on 3D dynamic range sequences

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvements in 3D image acquisition technology and its wide range of applications, such as access control, surveillance, human-computer interaction, and biometric identification systems. Most research on 3D face recognition has focused on the analysis of still 3D data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results using 3D sequences in the presence of pose variation are presented and discussed. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  8. Face recognition: database acquisition, hybrid algorithms, and human studies

    NASA Astrophysics Data System (ADS)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

    One of the most important technologies absent from traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross-validation (CV), for surveillance on a database of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  9. Supervised orthogonal discriminant subspace projects learning for face recognition.

    PubMed

    Chen, Yu; Xu, Xiao-Hong

    2014-02-01

    In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. In order to model the manifold structure, class information is incorporated into the weight matrix. Based on this weight matrix, the local scatter matrix as well as the non-local scatter matrix are defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, theoretical analysis shows that LPP is a special instance of SODSP obtained by imposing some constraints. Experiments on the ORL, Yale, Extended Yale Face Database B, and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.

  10. Automatic recognition of facial movement for paralyzed face.

    PubMed

    Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke

    2014-01-01

    Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
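
    A sketch of the regional LBP description stage mentioned above, assuming the ASM landmarking step has already produced bounding boxes for the eight local regions. It uses scikit-image's local_binary_pattern; the neighbourhood parameters and box format are illustrative assumptions rather than the authors' configuration.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def region_lbp_descriptor(gray, regions, P=8, R=1):
          """Concatenate uniform-LBP histograms over facial regions.
          gray: 2-D image; regions: list of (top, bottom, left, right) boxes,
          here assumed to come from the ASM landmarking stage."""
          hists = []
          for top, bottom, left, right in regions:
              codes = local_binary_pattern(gray[top:bottom, left:right],
                                           P, R, method="uniform")
              hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
              hists.append(hist)
          return np.concatenate(hists)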

  11. Perception and motivation in face recognition: a critical review of theories of the Cross-Race Effect.

    PubMed

    Young, Steven G; Hugenberg, Kurt; Bernstein, Michael J; Sacco, Donald F

    2012-05-01

    Although humans possess well-developed face processing expertise, face processing is nevertheless subject to a variety of biases. Perhaps the best known of these biases is the Cross-Race Effect: the tendency to have more accurate recognition for same-race than cross-race faces. The current work critically reviews the evidence for and theories of the Cross-Race Effect, including perceptual expertise and social cognitive accounts of the bias. The authors conclude that recent hybrid models of the Cross-Race Effect, which combine elements of both perceptual expertise and social cognitive frameworks, provide an opportunity for theoretical synthesis and advancement not afforded by independent expertise or social cognitive models. Finally, the authors suggest future research directions intended to further develop a comprehensive and integrative understanding of biases in face recognition.

  12. Kernel Learning of Histogram of Local Gabor Phase Patterns for Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Baochang; Wang, Zongli; Zhong, Bineng

    2008-12-01

    This paper proposes a new face recognition method, named kernel learning of histogram of local Gabor phase patterns (K-HLGPP), which is based on Daugman's method for iris recognition and the local XOR pattern (LXP) operator. Unlike traditional Gabor-based approaches that exploit only the magnitude part for face recognition, we encode the Gabor phase information for face classification using the quadrant bit coding (QBC) method. Two schemes are proposed for recognition: one is based on the nearest-neighbor classifier with the chi-square statistic as the similarity measurement, and the other performs kernel discriminant analysis on HLGPP (K-HLGPP) using histogram intersection and Gaussian-weighted chi-square kernels. Comparative experiments show that K-HLGPP achieves a higher recognition rate than other well-known face recognition systems on the large-scale standard FERET, FERET200, and CAS-PEAL-R1 databases.
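
    Both matching schemes mentioned above rest on the chi-square statistic between histogram features. The sketch below shows the chi-square distance and a Gaussian-weighted chi-square kernel of the kind that could feed a kernel discriminant analysis; the bandwidth parameter and function names are assumptions for illustration.

      import numpy as np

      def chi_square(h1, h2, eps=1e-12):
          """Chi-square distance between two (normalised) histograms."""
          h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
          return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

      def gaussian_chi_square_kernel(h1, h2, gamma=1.0):
          """Gaussian-weighted chi-square kernel; `gamma` is an assumed bandwidth."""
          return np.exp(-gamma * chi_square(h1, h2))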

  13. A robust face recognition algorithm under varying illumination using adaptive retina modeling

    NASA Astrophysics Data System (ADS)

    Cheong, Yuen Kiat; Yap, Vooi Voon; Nisar, Humaira

    2013-10-01

    Variation in illumination has a drastic effect on the appearance of a face image and may hinder the automatic face recognition process. This paper presents a novel approach to face recognition under varying lighting conditions. The proposed algorithm uses adaptive retina modeling for illumination normalization: retina modeling, which combines two adaptive nonlinear equations and a difference-of-Gaussians filter, is employed along with histogram remapping following a normal distribution. Two databases, the Extended Yale B database and the CMU PIE database, are used to verify the proposed algorithm. For face recognition, the Gabor Kernel Fisher Analysis method is used. Experimental results show that the recognition rate for face images under different illumination conditions is improved by the proposed approach: the average recognition rate is 99.16% for the Extended Yale B database and 99.64% for the CMU PIE database.
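
    One ingredient of the retina modelling named above is a difference-of-Gaussians filter. The sketch below shows only that band-pass step (the adaptive nonlinear equations and histogram remapping are omitted); the standard deviations and output scaling are illustrative choices, not the paper's values.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def difference_of_gaussians(gray, sigma_inner=1.0, sigma_outer=4.0):
          """Band-pass the image by subtracting a wide Gaussian blur from a narrow
          one, which suppresses slowly varying illumination."""
          gray = np.asarray(gray, dtype=float)
          dog = gaussian_filter(gray, sigma_inner) - gaussian_filter(gray, sigma_outer)
          return (dog - dog.min()) / (dog.max() - dog.min() + 1e-12)  # rescale to [0, 1]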

  14. Face recognition ability matures late: evidence from individual differences in young adults.

    PubMed

    Susilo, Tirta; Germine, Laura; Duchaine, Bradley

    2013-10-01

    Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities.

  15. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills.

  16. Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula

    2007-01-01

    We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…

  17. The effect of gaze direction on three-dimensional face recognition in infants.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2012-09-01

    Eye gaze is an important tool for social contact. In this study, we investigated whether direct gaze facilitates the recognition of three-dimensional face images in infants. We presented artificially produced face images in rotation to 6- to 8-month-old infants. The eye gaze of the face images was either direct or averted. Sixty-one sequential images of each face were created by rotating the face about its vertical axis from the frontal view to ±30°. The recognition performance of the infants was then compared between faces with direct gaze and faces with averted gaze. Infants showed evidence of discriminating the novel face from the familiarized face by 8 months of age, and only when gaze was direct. These results suggest that gaze direction may affect three-dimensional face recognition in infants.

  18. Accurate Iris Recognition at a Distance Using Stabilized Iris Encoding and Zernike Moments Phase Features.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2014-07-10

    Accurate iris recognition from distantly acquired face or eye images requires the development of effective strategies that can account for significant variations in segmented iris image quality. Such variations can be highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes fragile bits while rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. These Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel-region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements in equal error rate of 54.3%, 32.7%, and 42.6%, respectively, for UBIRIS.v2, FRGC, and CASIA.v4-distance.

  19. The "parts and wholes" of face recognition: A review of the literature.

    PubMed

    Tanaka, James W; Simonyi, Diana

    2016-10-01

    It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for its parts was tested both in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and in people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing.

  20. Development of Face Recognition in 5- to 15-Year-Olds

    ERIC Educational Resources Information Center

    Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka

    2013-01-01

    This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…

  1. Peak Shift but Not Range Effects in Recognition of Faces

    ERIC Educational Resources Information Center

    Spetch, Marcia L.; Cheng, Ken; Clifford, Colin W. G.

    2004-01-01

    University students were trained to discriminate between two gray-scale images of faces that varied along a continuum from a unique face to an average face created by morphing. Following training, participants were tested without feedback for their ability to recognize the positive face (S+) within a range of faces along the continuum. In…

  2. Size determines whether specialized expert processes are engaged for recognition of faces.

    PubMed

    Yang, Nan; Shafai, Fakhri; Oruc, Ipek

    2014-07-22

    Many influential models of face recognition postulate specialized expert processes that are engaged when viewing upright, own-race faces, as opposed to a general-purpose recognition route used for nonface objects and inverted or other-race faces. In contrast, others have argued that empirical differences do not stem from qualitatively distinct processing. We offer a potential resolution to this ongoing controversy. We hypothesize that faces engage specialized processes at large sizes only. To test this, we measured recognition efficiencies for a wide range of sizes. Upright face recognition efficiency increased with size. This was not due to better visibility of basic image features at large sizes. We ensured this by calculating efficiency relative to a specialized ideal observer unique to each individual that incorporated size-related changes in visibility and by measuring inverted efficiencies across the same range of face sizes. Inverted face recognition efficiencies did not change with size. A qualitative face inversion effect, defined as the ratio of relative upright and inverted efficiencies, showed a complete lack of inversion effects for small sizes up to 6°. In contrast, significant face inversion effects were found for all larger sizes. Size effects may stem from predominance of larger faces in the overall exposure to faces, which occur at closer viewing distances typical of social interaction. Our results offer a potential explanation for the contradictory findings in the literature regarding the special status of faces.

  3. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces.

  4. Emotion processing in chimeric faces: hemispheric asymmetries in expression and recognition of emotions.

    PubMed

    Indersmitten, Tim; Gur, Ruben C

    2003-05-01

    Since the discovery of facial asymmetries in emotional expressions of humans and other primates, hypotheses have related the greater left-hemiface intensity to right-hemispheric dominance in emotion processing. However, the difficulty of creating true frontal views of facial expressions in two-dimensional photographs has confounded efforts to better understand the phenomenon. We have recently described a method for obtaining three-dimensional photographs of posed and evoked emotional expressions and used these stimuli to investigate both intensity of expression and accuracy of recognizing emotion in chimeric faces constructed from only left- or right-side composites. The participant population included 38 (19 male, 19 female) African-American, Caucasian, and Asian adults. They were presented with chimeric composites generated from faces of eight actors and eight actresses showing four emotions: happiness, sadness, anger, and fear, each in posed and evoked conditions. We replicated the finding that emotions are expressed more intensely in the left hemiface for all emotions and conditions, with the exception of evoked anger, which was expressed more intensely in the right hemiface. In contrast, the results indicated that emotional expressions are recognized more efficiently in the right hemiface, indicating that the right hemiface expresses emotions more accurately. The double dissociation between the laterality of expression intensity and that of recognition efficiency supports the notion that the two kinds of processes may have distinct neural substrates. Evoked anger is uniquely expressed more intensely and accurately on the side of the face that projects to the viewer's right hemisphere, dominant in emotion recognition.

  5. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development.

  6. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  7. Applying local Gabor ternary pattern for video-based illumination variable face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Huafeng; Han, Yong; Zhang, Zhaoxiang

    2011-12-01

    The illumination variation problem is one of the well-known problems in face recognition in uncontrolled environments. Because both Gabor features and the local ternary pattern (LTP) have been shown to be robust to illumination variations, we propose a new approach that achieves face recognition under variable illumination by combining Gabor filters with the LTP operator. Experimental results, compared with published results on the Yale-B and CMU PIE face databases under changing illumination, verify the validity of the proposed method.
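
    A sketch of the local ternary pattern operator referred to above: each neighbour is compared with the centre pixel using a tolerance t, and the resulting ternary code is split into an upper and a lower binary pattern. The tolerance and the 8-neighbourhood layout are assumptions, and the Gabor filtering stage is not shown.

      import numpy as np

      def ltp_codes(gray, t=5):
          """Return (upper, lower) LTP binary codes for each interior pixel of `gray`,
          using the 8-neighbourhood and tolerance t."""
          g = np.asarray(gray, dtype=np.int32)
          c = g[1:-1, 1:-1]                                  # centre pixels
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          upper = np.zeros_like(c)
          lower = np.zeros_like(c)
          for bit, (dy, dx) in enumerate(offsets):
              n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
              upper |= (n >= c + t).astype(np.int32) << bit  # ternary value +1
              lower |= (n <= c - t).astype(np.int32) << bit  # ternary value -1
          return upper, lower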

  9. Robust Representations for Face Recognition: The Power of Averages

    ERIC Educational Resources Information Center

    Burton, A. Mike; Jenkins, Rob; Hancock, Peter J. B.; White, David

    2005-01-01

    We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal…

  10. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.

  11. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  12. Tensor-based AAM with continuous variation estimation: application to variation-robust face recognition.

    PubMed

    Lee, Hyung-Soo; Kim, Daijin

    2009-06-01

    The active appearance model (AAM) is a well-known model that can represent a non-rigid object effectively. However, the fitting result is often unsatisfactory when an input image deviates from the training images, because of the fixed shape and appearance model. To obtain more robust AAM fitting, we propose a tensor-based AAM that can handle a variety of subjects, poses, expressions, and illuminations in a tensor algebra framework consisting of an image tensor and a model tensor. The image tensor estimates image variations such as pose, expression, and illumination of the input image using two different variation estimation techniques: discrete and continuous variation estimation. The model tensor generates variation-specific AAM basis vectors from the estimated image variations, which leads to more accurate fitting results. To validate the usefulness of the tensor-based AAM, we performed variation-robust face recognition using the tensor-based AAM fitting results. To do so, we propose an indirect AAM feature transformation. Experimental results show that the tensor-based AAM with continuous variation estimation outperforms the variant with discrete variation estimation and the conventional AAM in terms of average fitting error and face recognition rate.

  13. Robust Nuclear Norm-based Matrix Regression with Applications to Robust Face Recognition.

    PubMed

    Xie, Jianchun; Yang, Jian; Qian, Jianjun; Tai, Ying; Zhang, Hengmin

    2017-02-01

    Face recognition (FR) via regression-analysis-based classification has been widely studied in the past several years. Most existing regression analysis methods characterize the pixelwise representation error via the l1-norm or l2-norm, which overlooks the two-dimensional structure of the error image. Recently, the nuclear norm based matrix regression (NMR) model was proposed to characterize the low-rank structure of the error image. However, the nuclear norm cannot accurately describe low-rank structural noise when the incoherence assumptions on the singular values do not hold, since it over-penalizes the largest singular values. To address this problem, this paper presents the robust nuclear norm to characterize the structural error image and then extends it to deal with mixed noise. The majorization-minimization (MM) method is applied to derive an iterative scheme for the robust nuclear norm optimization problem, and an efficient alternating direction method of multipliers (ADMM) is then used to solve the proposed models. We use the weighted nuclear norm as the classification criterion to obtain the final recognition results. Experiments on several public face databases demonstrate the effectiveness of our models in handling variations of structural noise (occlusion, illumination, etc.) and mixed noise.
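
    The classification idea can be sketched as follows: fit the probe as a linear combination of each class's training images and score the class by the nuclear norm of the reshaped residual image. The sketch below substitutes an ordinary least-squares fit and the plain nuclear norm for the paper's robust weighted nuclear norm and its MM/ADMM solvers; the data shapes and names are assumptions.

    import numpy as np

    def nuclear_norm(residual_matrix):
        """Nuclear norm = sum of singular values of the 2-D residual image."""
        return np.linalg.svd(residual_matrix, compute_uv=False).sum()

    def nmr_style_classify(probe, class_images, img_shape):
        """Assign the probe to the class whose regression residual has the smallest nuclear norm.

        probe        : flattened probe face, shape (d,)
        class_images : dict mapping class label -> array of shape (n_c, d) of flattened training faces
        img_shape    : (height, width) used to reshape the residual back into an image
        """
        best_label, best_score = None, np.inf
        for label, gallery in class_images.items():
            A = gallery.T                                     # d x n_c dictionary for this class
            coeffs, *_ = np.linalg.lstsq(A, probe, rcond=None)
            residual = (probe - A @ coeffs).reshape(img_shape)
            score = nuclear_norm(residual)                    # low-rank structured errors score low
            if score < best_score:
                best_label, best_score = label, score
        return best_label

    # Usage sketch with random stand-in data (32x32 faces, 3 classes, 5 images each):
    rng = np.random.default_rng(0)
    gallery = {c: rng.random((5, 32 * 32)) for c in range(3)}
    probe = gallery[1][0] + 0.05 * rng.random(32 * 32)
    print(nmr_style_classify(probe, gallery, (32, 32)))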

  14. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme with two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumbnail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. Because there are usually very few training samples per person, we construct an associative memory (AM) model for each person and improve the performance of the AM models with kernel methods. Specifically, we first apply kernel transforms to each possible pair of training face samples and then map the high-dimensional feature space back to the input space. Our scheme of using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been demonstrated. In the associative memory, all prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
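
    A minimal sketch of the overall scheme, assuming PyWavelets (pywt) for the 2-D wavelet decomposition and using a plain linear autoassociative memory per person in place of the paper's kernel AM: a probe is assigned to the person whose memory reconstructs it with the smallest error. All parameter choices below are illustrative assumptions.

    import numpy as np
    import pywt

    def wavelet_feature(face, wavelet="haar", level=2):
        """Low-frequency (approximation) subband coefficients as a compact face representation."""
        coeffs = pywt.wavedec2(face, wavelet, level=level)
        return coeffs[0].ravel()                      # approximation subband only

    def train_memories(gallery):
        """One linear autoassociative memory per person: W = X @ pinv(X), columns of X are features."""
        memories = {}
        for person, faces in gallery.items():
            X = np.stack([wavelet_feature(f) for f in faces], axis=1)   # d x n matrix of prototypes
            memories[person] = X @ np.linalg.pinv(X)
        return memories

    def recognise(probe_face, memories):
        """Return the person whose memory reconstructs the probe with the smallest error."""
        x = wavelet_feature(probe_face)
        errors = {p: np.linalg.norm(x - W @ x) for p, W in memories.items()}
        return min(errors, key=errors.get)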

  15. Reversibility of the other-race effect in face recognition during childhood.

    PubMed

    Sangrigoli, S; Pallier, C; Argenti, A-M; Ventureyra, V A G; de Schonen, S

    2005-06-01

    Early experience with faces of a given racial type facilitates visual recognition for this type of face relative to others. To assess whether this so-called other-race effect can be reversed by subsequent experience with new types of faces, we tested adults of Korean origin who were adopted by European Caucasian families when they were between the ages of 3 and 9. The adoptees performed a face recognition task with photographs of Caucasian and Asian faces. They performed exactly like a control group of French participants, identifying the Caucasian faces better than the Asian ones. In contrast, a control group of Koreans showed the reverse pattern. This result indicates that the face recognition system remains plastic enough during childhood to reverse the other-race effect.

  16. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  17. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    PubMed

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition.

  18. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751

  19. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-07-23

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.
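
    A schematic of the weighted-fusion idea: the sketch below evolves a single visible/thermal fusion weight with a tiny genetic algorithm so as to maximise rank-1 recognition on a validation set. The chromosome encoding, GA parameters, and score-matrix layout are illustrative assumptions; the paper evolves richer, region-wise fusion parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def rank1_rate(weight, vis_scores, th_scores, labels):
        """Fraction of probes whose top fused score belongs to the correct gallery identity.

        vis_scores, th_scores : (n_probes, n_gallery) similarity matrices from the two spectra
        labels                : correct gallery index for each probe
        """
        fused = weight * vis_scores + (1.0 - weight) * th_scores
        return np.mean(fused.argmax(axis=1) == labels)

    def ga_fusion_weight(vis_scores, th_scores, labels,
                         pop_size=30, generations=50, mutation_std=0.05):
        """Evolve a single fusion weight in [0, 1] (a full system would evolve per-region weights)."""
        population = rng.random(pop_size)
        for _ in range(generations):
            fitness = np.array([rank1_rate(w, vis_scores, th_scores, labels) for w in population])
            parents = population[np.argsort(fitness)][-pop_size // 2:]      # keep the fittest half
            children = rng.choice(parents, pop_size - parents.size)         # clone parents...
            children = np.clip(children + rng.normal(0, mutation_std, children.size), 0, 1)  # ...and mutate
            population = np.concatenate([parents, children])
        final_fitness = [rank1_rate(w, vis_scores, th_scores, labels) for w in population]
        return population[int(np.argmax(final_fitness))]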

  20. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932

  1. Recognition memory for emotional faces in amnestic mild cognitive impairment: an event-related potential study.

    PubMed

    Schefter, Maria; Werheid, Katja; Almkvist, Ove; Lönnqvist-Akenine, Ulrika; Kathmann, Norbert; Winblad, Bengt

    2013-01-01

    This study examined the temporal course of emotional face recognition in amnestic mild cognitive impairment (aMCI). Patients and healthy controls (HC) performed a face recognition task, giving old/new responses to previously studied and novel faces displaying a negative or neutral expression. In aMCI patients, recognition accuracy was preserved for negative faces. Event-related potentials (ERPs) revealed disease-related changes in early perceptual components but not in ERP indices of explicit recognition. Specifically, aMCI patients showed impaired recognition effects for negative faces on the amplitudes of N170 and P2, suggesting deficient memory-related processing of negative faces at the stage of structural encoding and during an early recognition stage at which faces are individuated, respectively. Moreover, while a right-lateralized emotion effect specifically observed for correctly recognized faces on the amplitude of N170 was absent in aMCI, a similar emotion effect for successfully recognized faces on P2 was preserved in the patients, albeit with a different distribution. This suggests that in aMCI facilitated processing of successfully recognized emotional faces starts later in the processing sequence. Nonetheless, an early frontal old/new effect confined to negative faces and a parietal old/new effect unaffected by facial emotion were observed in both groups. This indicates that familiarity and conceptual priming processes may specifically contribute to recognition of negative faces in older adults and that aMCI patients can recruit the same retrieval mechanisms as controls, despite disease-related changes on early perceptual ERP components.

  2. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills

    PubMed Central

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called “super recognisers” (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the “Glasgow Face Matching Test”, and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the “Models Face Matching Test”. Once again, SRs outperformed controls both on group and mostly in case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies. PMID:26829321

  3. Implementation of The LDA Algorithm for Online Validation Based on Face Recognition

    NASA Astrophysics Data System (ADS)

    Zainuddin, Z.; Laswi, A. S.

    2017-01-01

    This paper reports on the implementation of a computer-vision face recognition application for online validation in distance learning. Face recognition was chosen over other validation alternatives because of its robustness: basic validation such as a password cannot verify the identity of a distance-learning student, which is unacceptable, especially in a distance examination. The face recognition algorithm used in this research is Linear Discriminant Analysis (LDA). Using this algorithm, the system is capable of recognizing authorized persons with about 93% accuracy and rejecting unauthorized persons with 100% accuracy.
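
    A minimal sketch of an LDA-based recogniser of this kind, using scikit-learn with a PCA step to avoid the small-sample singularity (the usual Fisherfaces recipe); the component count, acceptance threshold, and function names are assumptions rather than details from the paper.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    def train_lda_recogniser(face_vectors, person_ids, n_pca_components=100):
        """face_vectors: (n_samples, n_pixels) flattened grey-level faces; person_ids: class labels."""
        model = make_pipeline(
            PCA(n_components=n_pca_components, whiten=True),   # avoids the LDA small-sample problem
            LinearDiscriminantAnalysis(),
        )
        model.fit(face_vectors, person_ids)
        return model

    def validate(model, probe_vector, accept_threshold=0.8):
        """Accept the probe only if the most likely enrolled identity is confident enough."""
        probs = model.predict_proba(probe_vector.reshape(1, -1))[0]
        best = int(np.argmax(probs))
        if probs[best] >= accept_threshold:
            return model.classes_[best]       # authorised person recognised
        return None                           # rejected as unauthorised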

  4. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular 'surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and an average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation into how reliably faces can be individuated by their surface shape, which, if feasible, may form the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
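
    The curvature-based segmentation can be illustrated on a range (depth) image: compute mean and Gaussian curvature and label each pixel by their signs, which yields the classic eight surface types. The sketch below is a generic HK classification under an assumed zero-threshold eps and no smoothing, not the authors' exact procedure.

    import numpy as np

    def hk_surface_types(depth, eps=1e-4):
        """Classify each pixel of a range image z(x, y) by the signs of mean (H) and Gaussian (K) curvature.

        The sign combinations give the eight classic surface types (peak, pit, ridge, valley,
        saddle ridge, saddle valley, flat, minimal); H = 0 with K > 0 cannot occur on a real
        surface, so eight of the nine integer labels returned here are actually used.
        """
        zy, zx = np.gradient(depth.astype(float))
        zyy, zyx = np.gradient(zy)
        zxy, zxx = np.gradient(zx)
        g = 1.0 + zx**2 + zy**2
        H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
        K = (zxx * zyy - zxy * zyx) / g**2
        sH = np.where(np.abs(H) < eps, 0, np.sign(H)).astype(int)   # -1, 0, +1
        sK = np.where(np.abs(K) < eps, 0, np.sign(K)).astype(int)
        return (sH + 1) * 3 + (sK + 1)    # integer surface-type label per pixel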

  5. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a

  6. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a
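
    As a toy version of the weighted-sum linking hypothesis, the sketch below learns a ridge-regularised linear readout (a weighted sum of simulated population firing rates) to discriminate two objects and reports cross-validated performance; the neuron count, tuning model, and noise are stand-ins, not the study's recordings.

    import numpy as np
    from sklearn.linear_model import RidgeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Simulated stand-in for spatially distributed mean firing rates of an IT population:
    # two objects, many images each, n_neurons rates per image (object tuning plus trial noise).
    n_neurons, n_images = 500, 200
    tuning = rng.normal(0, 1, n_neurons)                      # per-neuron object preference
    labels = np.repeat([0, 1], n_images // 2)
    rates = rng.normal(0, 1, (n_images, n_neurons)) + np.outer(2 * labels - 1, 0.3 * tuning)

    # A simple learned weighted sum of firing rates = a ridge-regularised linear readout.
    readout = RidgeClassifier(alpha=1.0)
    accuracy = cross_val_score(readout, rates, labels, cv=5).mean()
    print(f"cross-validated discrimination performance of the weighted-sum readout: {accuracy:.2f}")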

  7. Face recognition systems in monkey and human: are they the same thing?

    PubMed

    Yovel, Galit; Freiwald, Winrich A

    2013-01-01

    Primate societies are based on face recognition. Face recognition mechanisms have been studied most extensively in humans and macaque monkeys. In both species, multiple brain areas specialized for face processing have been found, and their functional properties are characterized with increasing detail, so we can now begin to address questions about similarities and differences of face-recognition systems across species with 25 million years of separate evolution. Both systems are organized into multiple face-selective cortical areas in spatial arrangements and with functional specializations, implying both hierarchical and parallel modes of information processing. Yet open questions about homologies remain. To address these, future studies employing similar techniques and experimental designs across multiple species are needed to identify a putative core primate face processing system and to understand its differentiations into the multiple branches of the primate order.

  8. Face recognition systems in monkey and human: are they the same thing?

    PubMed Central

    2013-01-01

    Primate societies are based on face recognition. Face recognition mechanisms have been studied most extensively in humans and macaque monkeys. In both species, multiple brain areas specialized for face processing have been found, and their functional properties are characterized with increasing detail, so we can now begin to address questions about similarities and differences of face-recognition systems across species with 25 million years of separate evolution. Both systems are organized into multiple face-selective cortical areas in spatial arrangements and with functional specializations, implying both hierarchical and parallel modes of information processing. Yet open questions about homologies remain. To address these, future studies employing similar techniques and experimental designs across multiple species are needed to identify a putative core primate face processing system and to understand its differentiations into the multiple branches of the primate order. PMID:23585928

  9. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  10. Score Fusion and Decision Fusion for the Performance Improvement of Face Recognition

    DTIC Science & Technology

    2013-07-01

    a face recognition system, we propose a fusion solution consisting of score fusion of multispectral images and decision fusion of stereo images...Univ. MultiSpectral Stereo face dataset that currently consists of the stereo face images of two spectral bands from 105 subjects. The experimental... consists of two stereo imaging cameras (Left and Right). Each side has two spectral bands, visible and thermal. The face scores from multiple matchers are

  11. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  12. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing.

  13. High and low performers differ in the use of shape information for face recognition.

    PubMed

    Kaufmann, Jürgen M; Schulz, Claudia; Schweinberger, Stefan R

    2013-06-01

    Previous findings demonstrated that increasing facial distinctiveness by means of spatial caricaturing improves face learning and results in modulations of event-related-potential (ERP) components associated with the processing of typical shape information (P200) and with face learning and recognition (N250). The current study investigated performance-based differences in the effects of spatial caricaturing: a modified version of the Bielefelder famous faces test (BFFT) was applied to subdivide a non-clinical group of 28 participants into better and worse face recognizers. Overall, a learning benefit was seen for caricatured compared to veridical faces. In addition, for learned faces we found larger caricaturing effects in response times, inverse efficiency scores as well as in P200 and N250 amplitudes in worse face recognizers, indicating that these individuals profited disproportionately from exaggerated idiosyncratic face shape. During learning and for novel faces at test, better and worse recognizers showed similar caricaturing effects. We suggest that spatial caricaturing helps better and worse face recognizers accessing critical idiosyncratic shape information that supports identity processing and learning of unfamiliar faces. For familiarized faces, better face recognizers might depend less on exaggerated shape and make better use of texture information than worse recognizers. These results shed light on the transition from unfamiliar to familiar face processing and may also be relevant for developing training-programmes for people with difficulties in face recognition.

  14. Effects of acute psychosocial stress on neural activity to emotional and neutral faces in a face recognition memory paradigm.

    PubMed

    Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M

    2014-12-01

    Previous studies have shown that acute psychosocial stress impairs recognition of declarative memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala which modulates memory processes in hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoker male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner which was followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants but did not impact recognition memory. BOLD data during recognition revealed a stress condition by emotion interaction in the left inferior frontal gyrus and right hippocampus which was due to a stress-induced increase of neural activity to fearful and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus, when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with a stress-induced privileged processing of emotional stimuli.

  15. Is emotion recognition the only problem in ADHD? effects of pharmacotherapy on face and emotion recognition in children with ADHD.

    PubMed

    Demirci, Esra; Erdogan, Ayten

    2016-12-01

    The objectives of this study were to evaluate both face and emotion recognition, to detect differences among attention deficit and hyperactivity disorder (ADHD) subgroups, to identify effects of the gender and to assess the effects of methylphenidate and atomoxetine treatment on both face and emotion recognition in patients with ADHD. The study sample consisted of 41 male, 29 female patients, 8-15 years of age, who were diagnosed as having combined type ADHD (N = 26), hyperactive/impulsive type ADHD (N = 21) or inattentive type ADHD (N = 23) but had not previously used any medication for ADHD and 35 male, 25 female healthy individuals. Long-acting methylphenidate (OROS-MPH) was prescribed to 38 patients, whereas atomoxetine was prescribed to 32 patients. The reading the mind in the eyes test (RMET) and Benton face recognition test (BFRT) were applied to all participants before and after treatment. The patients with ADHD had a significantly lower number of correct answers in child and adolescent RMET and in BFRT than the healthy controls. Among the ADHD subtypes, the hyperactive/impulsive subtype had a lower number of correct answers in the RMET than the inattentive subtypes, and the hyperactive/impulsive subtype had a lower number of correct answers in short and long form of BFRT than the combined and inattentive subtypes. Male and female patients with ADHD did not differ significantly with respect to the number of correct answers on the RMET and BFRT. The patients showed significant improvement in RMET and BFRT after treatment with OROS-MPH or atomoxetine. Patients with ADHD have difficulties in face recognition as well as emotion recognition. Both OROS-MPH and atomoxetine affect emotion recognition. However, further studies on the face and emotion recognition are needed in ADHD.

  16. Image-invariant responses in face-selective regions do not explain the perceptual advantage for familiar face recognition.

    PubMed

    Davies-Thompson, Jodie; Newling, Katherine; Andrews, Timothy J

    2013-02-01

    The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. It is widely believed that this difference in perception and recognition is based on the neural representation for familiar faces being less sensitive to changes in the image than it is for unfamiliar faces. Here, we used a functional magnetic resonance-adaptation paradigm to investigate image invariance in face-selective regions of the human brain. We found clear evidence for a degree of image-invariant adaptation to facial identity in face-selective regions, such as the fusiform face area. However, contrary to the predictions of models of face processing, comparable levels of image invariance were evident for both familiar and unfamiliar faces. This suggests that the marked differences in the perception of familiar and unfamiliar faces may not depend on differences in the way multiple images are represented in core face-selective regions of the human brain.

  17. Color Face Recognition Based on Steerable Pyramid Transform and Extreme Learning Machines

    PubMed Central

    Uçar, Ayşegül

    2014-01-01

    This paper presents a novel color face recognition algorithm that fuses color and local information. The proposed algorithm fuses multiple features derived from different color spaces. Multiorientation and multiscale information relating to the color face features is extracted by applying the Steerable Pyramid Transform (SPT) to local face regions. First, three new hybrid color spaces, YSCr, ZnSCr, and BnSCr, are constructed using the Cb and Cr component images of the YCbCr color space, the S component of the HSV color space, and the Zn and Bn components of the normalized XYZ color space. Second, the color component face images are partitioned into local patches. Third, SPT is applied to the local face regions and some statistical features are extracted. Fourth, all features are fused in a decision-fusion framework and combinations of Extreme Learning Machine classifiers are applied to achieve fast and accurate color face recognition. The experiments show that the proposed Local Color Steerable Pyramid Transform (LCSPT) face recognition algorithm substantially improves face recognition performance when the new color spaces are used, compared to conventional and some hybrid color spaces. Furthermore, it achieves faster recognition than state-of-the-art approaches. PMID:24558319
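
    The classifier stage, an extreme learning machine, can be sketched as a random hidden layer followed by analytically solved output weights; the hidden-layer size, activation, and one-hot target encoding below are illustrative assumptions, and the steerable-pyramid feature extraction is omitted.

    import numpy as np

    class SimpleELM:
        """Single-hidden-layer extreme learning machine: random input weights, least-squares output weights."""

        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)              # random nonlinear feature map

        def fit(self, X, y):
            self.W = self.rng.normal(0, 1, (X.shape[1], self.n_hidden))
            self.b = self.rng.normal(0, 1, self.n_hidden)
            self.classes_ = np.unique(y)
            T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
            self.beta = np.linalg.pinv(self._hidden(X)) @ T            # analytic output weights
            return self

        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]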

  18. Color face recognition based on steerable pyramid transform and extreme learning machines.

    PubMed

    Uçar, Ayşegül

    2014-01-01

    This paper presents a novel color face recognition algorithm that fuses color and local information. The proposed algorithm fuses multiple features derived from different color spaces. Multiorientation and multiscale information relating to the color face features is extracted by applying the Steerable Pyramid Transform (SPT) to local face regions. First, three new hybrid color spaces, YSCr, ZnSCr, and BnSCr, are constructed using the Cb and Cr component images of the YCbCr color space, the S component of the HSV color space, and the Zn and Bn components of the normalized XYZ color space. Second, the color component face images are partitioned into local patches. Third, SPT is applied to the local face regions and some statistical features are extracted. Fourth, all features are fused in a decision-fusion framework and combinations of Extreme Learning Machine classifiers are applied to achieve fast and accurate color face recognition. The experiments show that the proposed Local Color Steerable Pyramid Transform (LCSPT) face recognition algorithm substantially improves face recognition performance when the new color spaces are used, compared to conventional and some hybrid color spaces. Furthermore, it achieves faster recognition than state-of-the-art approaches.

  19. Face Recognition and Processing in a Mini Brain

    DTIC Science & Technology

    2007-09-28

    brain containing less than 1 million neurons (in animal model of the honeybee ) uses to learn and subsequently recognize human faces. There were four...specific aims. Aim (i) The project has been able to identify that the miniature brain of honeybees learns to recognize faces by binding information...model of the honeybee ) uses to learn and subsequently recognize human faces. There were four specific aims (detailed below) to the project, and all of

  20. No Own-Age Advantage in Children’s Recognition of Emotion on Prototypical Faces of Different Ages

    PubMed Central

    Griffiths, Sarah; Penton-Voak, Ian S.; Jarrold, Chris; Munafò, Marcus R.

    2015-01-01

    We test whether there is an own-age advantage in emotion recognition using prototypical younger child, older child and adult faces displaying emotional expressions. Prototypes were created by averaging photographs of individuals from 6 different age and sex categories (male 5–8 years, male 9–12 years, female 5–8 years, female 9–12 years, adult male and adult female), each posing 6 basic emotional expressions. In the study 5–8 year old children (n = 33), 9–13 year old children (n = 70) and adults (n = 92) labelled these expression prototypes in a 6-alternative forced-choice task. There was no evidence that children or adults recognised expressions better on faces from their own age group. Instead, child facial expression prototypes were recognised as accurately as adult expression prototypes by all age groups. This suggests there is no substantial own-age advantage in children’s emotion recognition. PMID:25978656

  1. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    ERIC Educational Resources Information Center

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  2. Savings In Relearning Face-Name Associations as Evidence for Covert Recognition in Prosopagnosia

    DTIC Science & Technology

    1992-01-01

    recognition. In the present article we examine a single covert test, the face- name relearning task, with the goal of distinguishing the two hypotheses...Memory, A, 453-468. Newcombe, F., Young, A. W., & de Haan, E. H. F. (1989). Prosopagnosia and object agnosia without covert recognition

  3. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    PubMed

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.

  4. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  5. Capturing specific abilities as a window into human individuality: the example of face recognition.

    PubMed

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  6. Use of 3D faces facilitates facial expression recognition in children

    PubMed Central

    Wang, Lamei; Chen, Wenfeng; Li, Hong

    2017-01-01

    This study assessed whether presenting 3D face stimuli could facilitate children’s facial expression recognition. Seventy-one children aged between 3 and 6 participated in the study. Their task was to judge whether a face presented in each trial showed a happy or fearful expression. Half of the face stimuli were shown with 3D representations, whereas the other half of the images were shown as 2D pictures. We compared expression recognition under these conditions. The results showed that the use of 3D faces improved the speed of facial expression recognition in both boys and girls. Moreover, 3D faces improved boys’ recognition accuracy for fearful expressions. Since fear is the most difficult facial expression for children to recognize, the facilitation effect of 3D faces has important practical implications for children with difficulties in facial expression recognition. The potential benefits of 3D representation for other expressions also have implications for developing more realistic assessments of children’s expression recognition. PMID:28368008

  7. Recognition Memory Measures Yield Disproportionate Effects of Aging on Learning Face-Name Associations

    PubMed Central

    James, Lori E.; Fogler, Kethera A.; Tauber, Sarah K.

    2008-01-01

    No previous research has tested whether the specific age-related deficit in learning face-name associations that has been identified using recall tasks also occurs for recognition memory measures. Young and older participants saw pictures of unfamiliar people with a name and an occupation for each person, and were tested on a matching (in Experiment 1) or multiple-choice (in Experiment 2) recognition memory test. For both recognition measures, the pattern of effects was the same as that obtained using a recall measure: more face-occupation associations were remembered than face-name associations, young adults remembered more associated information than older adults overall, and older adults had disproportionately poorer memory for face-name associations. Findings implicate age-related difficulty in forming and retrieving the association between the face and the name as the primary cause of obtained deficits in previous name learning studies. PMID:18808254

  8. Recognition memory measures yield disproportionate effects of aging on learning face-name associations.

    PubMed

    James, Lori E; Fogler, Kethera A; Tauber, Sarah K

    2008-09-01

    No previous research has tested whether the specific age-related deficit in learning face-name associations that has been identified using recall tasks also occurs for recognition memory measures. Young and older participants saw pictures of unfamiliar people with a name and an occupation for each person, and were tested on a matching (in Experiment 1) or multiple-choice (in Experiment 2) recognition memory test. For both recognition measures, the pattern of effects was the same as that obtained using a recall measure: More face-occupation associations were remembered than face-name associations, young adults remembered more associated information than older adults overall, and older adults had disproportionately poorer memory for face-name associations. Findings implicate age-related difficulty in forming and retrieving the association between the face and the name as the primary cause of obtained deficits in previous name learning studies.

  9. Atypical Development of Face and Greeble Recognition in Autism

    ERIC Educational Resources Information Center

    Scherf, K. Suzanne; Behrmann, Marlene; Minshew, Nancy; Luna, Beatriz

    2008-01-01

    Background: Impaired face processing is a widely documented deficit in autism. Although the origin of this deficit is unclear, several groups have suggested that a lack of perceptual expertise is contributory. We investigated whether individuals with autism develop expertise in visuoperceptual processing of faces and whether any deficiency in such…

  10. Effect of Partial Occlusion on Newborns' Face Preference and Recognition

    ERIC Educational Resources Information Center

    Gava, Lucia; Valenza, Eloisa; Turati, Chiara; de Schonen, Scania

    2008-01-01

    Many studies have shown that newborns prefer (e.g. Goren, Sarty & Wu, 1975 ; Valenza, Simion, Macchi Cassia & Umilta, 1996) and recognize (e.g. Bushnell, Say & Mullin, 1989; Pascalis & de Schonen, 1994) faces. However, it is not known whether, at birth, faces are still preferred and recognized when some of their parts are not visible because…

  11. Training of familiar face recognition and visual scan paths for faces in a child with congenital prosopagnosia.

    PubMed

    Schmalzl, Laura; Palermo, Romina; Green, Melissa; Brunsdon, Ruth; Coltheart, Max

    2008-07-01

    In the current report we describe a successful training study aimed at improving recognition of a set of familiar face photographs in K., a 4-year-old girl with congenital prosopagnosia (CP). A detailed assessment of K.'s face-processing skills showed a deficit in structural encoding, most pronounced in the processing of facial features within the face. In addition, eye movement recordings revealed that K.'s scan paths for faces were characterized by a large percentage of fixations directed to areas outside the internal core features (i.e., eyes, nose, and mouth), in particular by poor attendance to the eye region. Following multiple baseline assessments, training focused on teaching K. to reliably recognize a set of familiar face photographs by directing visual attention to specific characteristics of the internal features of each face. The training significantly improved K.'s ability to recognize the target faces, with her performance being flawless immediately after training as well as at a follow-up assessment 1 month later. In addition, eye movement recordings following training showed a significant change in K.'s scan paths, with a significant increase in the percentage of fixations directed to the internal features, particularly the eye region. Encouragingly, not only was the change in scan paths observed for the set of familiar trained faces, but it generalized to a set of faces that was not presented during training. In addition to documenting significant training effects, our study raises the intriguing question of whether abnormal scan paths for faces may be a common factor underlying face recognition impairments in childhood CP, an issue that has not been explored so far.

  12. Organization of face and object recognition in modular neural network models.

    PubMed

    Dailey, M N.; Cottrell, G W.

    1999-10-01

    There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this paper, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to (1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, (2) the developing infant's need to perform subordinate classification (identification) of faces early on, and (3) the infant's low visual acuity at birth. Inspired by de Schonen, Mancini and Liegeois' arguments (1998) [de Schonen, S., Mancini, J., Liegeois, F. (1998). About functional cortical specialization: the development of face recognition. In: F. Simon & G. Butterworth, The development of sensory, motor, and cognitive capacities in early infancy (pp. 103-116). Hove, UK: Psychology Press] that factors like these could bias the visual system to develop a processing subsystem particularly useful for face recognition, and Jacobs and Kosslyn's experiments (1994) [Jacobs, R. A., & Kosslyn, S. M. (1994). Encoding shape and spatial relations-the role of receptive field size in coordination complementary representations. Cognitive Science, 18(3), 361-368] in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules

  13. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations.

    PubMed

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F

    2015-12-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments.

  14. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations

    PubMed Central

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A.; Ronda, Joseph M.; Duffy, Jeanne F.

    2015-01-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-hour interval between presentation and recall included an 8-hour nighttime sleep opportunity (“Sleep”), while in the other condition they remained awake (“Wake”). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the “Wake” and “Sleep” conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. PMID:26549626

  15. From face to interface recognition: a differential geometric approach to distinguish DNA from RNA binding surfaces

    PubMed Central

    Shazman, Shula; Elber, Gershon; Mandel-Gutfreund, Yael

    2011-01-01

    Protein nucleic acid interactions play a critical role in all steps of the gene expression pathway. Nucleic acid (NA) binding proteins interact with their partners, DNA or RNA, via distinct regions on their surface that are characterized by an ensemble of chemical, physical and geometrical properties. In this study, we introduce a novel methodology based on differential geometry, commonly used in face recognition, to characterize and predict NA binding surfaces on proteins. Applying the method to experimentally solved three-dimensional structures of proteins, we successfully distinguish double-stranded DNA (dsDNA) binding proteins from single-stranded RNA (ssRNA) binding proteins, with 83% accuracy. We show that the method is insensitive to conformational changes that occur upon binding and is applicable to de novo protein-function prediction. Remarkably, when concentrating on the zinc finger motif, we distinguish successfully between RNA and DNA binding interfaces possessing the same binding motif even within the same protein, as demonstrated for the RNA polymerase transcription factor TFIIIA. In conclusion, we present a novel methodology to characterize protein surfaces, which can accurately tell apart dsDNA binding interfaces from ssRNA binding interfaces. The strength of our method in recognizing fine-tuned differences on NA binding interfaces makes it applicable to many other molecular recognition problems, with potential implications for drug design. PMID:21693557

  16. When family looks strange and strangers look normal: a case of impaired face perception and recognition after stroke.

    PubMed

    Heutink, Joost; Brouwer, Wiebo H; Kums, Evelien; Young, Andy; Bouma, Anke

    2012-02-01

    We describe a patient (JS) with impaired recognition and distorted visual perception of faces after an ischemic stroke. Strikingly, JS reports that the faces of family members look distorted, while faces of other people look normal. After neurological and neuropsychological examination, we assessed response accuracy, response times, and skin conductance responses on a face recognition task in which photographs of close family members, celebrities and unfamiliar people were presented. JS' performance was compared to the performance of three healthy control participants. Results indicate that three aspects of face perception appear to be impaired in JS. First, she has impaired recognition of basic emotional expressions. Second, JS has poor recognition of familiar faces in general, but recognition of close family members is disproportionally impaired compared to faces of celebrities. Third, JS perceives faces of family members as distorted. In this paper we consider whether these impairments can be interpreted in terms of previously described disorders of face perception and recent models for face perception.

  17. What drives social in-group biases in face recognition memory? ERP evidence from the own-gender bias

    PubMed Central

    Kemter, Kathleen; Schweinberger, Stefan R.; Wiese, Holger

    2014-01-01

    It is well established that memory is more accurate for own-relative to other-race faces (own-race bias), which has been suggested to result from larger perceptual expertise for own-race faces. Previous studies also demonstrated better memory for own-relative to other-gender faces, which is less likely to result from differences in perceptual expertise, and rather may be related to social in-group vs out-group categorization. We examined neural correlates of the own-gender bias using event-related potentials (ERP). In a recognition memory experiment, both female and male participants remembered faces of their respective own gender more accurately compared with other-gender faces. ERPs during learning yielded significant differences between the subsequent memory effects (subsequently remembered – subsequently forgotten) for own-gender compared with other-gender faces in the occipito-temporal P2 and the central N200, whereas neither later subsequent memory effects nor ERP old/new effects at test reflected a neural correlate of the own-gender bias. We conclude that the own-gender bias is mainly related to study phase processes, which is in line with sociocognitive accounts. PMID:23474824

  18. Eye-tracking the own-race bias in face recognition: revealing the perceptual and socio-cognitive mechanisms.

    PubMed

    Hills, Peter J; Pake, J Michael

    2013-12-01

    Own-race faces are recognised more accurately than other-race faces and may even be viewed differently as measured by an eye-tracker (Goldinger, Papesh, & He, 2009). Alternatively, observer race might direct eye-movements (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Observer differences in eye-movements are likely to be based on experience of the physiognomic characteristics that are differentially discriminating for Black and White faces. Two experiments are reported that employed standard old/new recognition paradigms in which Black and White observers viewed Black and White faces with their eye-movements recorded. Experiment 1 showed that there were observer race differences in terms of the features scanned but observers employed the same strategy across different types of faces. Experiment 2 demonstrated that other-race faces could be recognised more accurately if participants had their first fixation directed to more diagnostic features using fixation crosses. These results are entirely consistent with those presented by Blais et al. (2008) and with the perceptual interpretation that the own-race bias is due to inappropriate attention allocated to the facial features (Hills & Lewis, 2006, 2011).

  19. The faces of Moebius syndrome: recognition and anticipatory guidance.

    PubMed

    Broussard, Anne Bienvenu; Borazjani, June G

    2008-01-01

    Moebius syndrome is a rare congenital disorder characterized mainly by the inability to move the eyes laterally or produce facial expressions such as smiling. Moebius syndrome creates physical problems for the affected individual that may, in some cases, lead to emotional or social adjustment issues, yet the syndrome is relatively unknown among healthcare professionals. Because early recognition of Moebius syndrome can lead to early diagnosis and treatment, education of nurses in perinatal, pediatric, midwifery, and neonatal specialties is crucial. Through early recognition, maternal-child nurses can offer anticipatory guidance and provide or recommend resources to parents of children with this neurological condition.

  20. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter.
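
    The abstract combines an optical correlation scheme with an ICA face representation. As a rough, software-only illustration of that idea (not the authors' optical implementation), the sketch below projects faces into an ICA basis with scikit-learn's FastICA and matches a probe by normalized correlation of the ICA coefficients; all images, dimensions, and parameters are placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Placeholder data: 100 vectorized 32x32 "faces"; probe is a noisy copy of face 3.
    rng = np.random.default_rng(7)
    train = rng.random((100, 32 * 32))
    probe = train[3] + 0.05 * rng.random(32 * 32)

    # ICA representation of the training set, then correlation-based matching.
    ica = FastICA(n_components=30, random_state=0, max_iter=1000)
    train_codes = ica.fit_transform(train)
    probe_code = ica.transform(probe[None, :])[0]

    def ncorr(a, b):
        """Normalized (zero-mean) correlation between two coefficient vectors."""
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = [ncorr(probe_code, c) for c in train_codes]
    print("best match:", int(np.argmax(scores)))   # likely 3, since the probe is a noisy copy of face 3
    ```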

  1. Effects of Repetition and Configural Changes on the Development of Face Recognition Processes

    ERIC Educational Resources Information Center

    Itier, Roxane J.; Taylor, Margot J.

    2004-01-01

    We investigated the effect of repetition on recognition of upright, inverted and contrast-reversed target faces in children from 8 to 15 years when engaged in a learning phase/test phase paradigm with target and distractor faces. Early (P1, N170) and late ERP components were analysed. Children across age groups performed equally well, and were…

  2. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary.

  3. An Own-Race Advantage for Components as Well as Configurations in Face Recognition

    ERIC Educational Resources Information Center

    Hayward, William G.; Rhodes, Gillian; Schwaninger, Adrian

    2008-01-01

    The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in…

  4. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  5. Brief Report: Developing Spatial Frequency Biases for Face Recognition in Autism and Williams Syndrome

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2011-01-01

    The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…

  6. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  7. The design and implementation of effective face detection and recognition system

    NASA Astrophysics Data System (ADS)

    Sun, Yigui

    2011-06-01

    In this paper, a face detection and recognition system (FDRS) based on video sequences and still images is proposed. It uses the AdaBoost algorithm to detect human faces in an image or video frame and adopts the Discrete Cosine Transform (DCT) for feature extraction and recognition of face images. The related technologies are first outlined. Then, the system requirements and UML use case diagram are described. In addition, the paper introduces the design solution and key procedures. The FDRS source code is built in VC++ with the Standard Template Library (STL) and the Intel Open Source Computer Vision Library (OpenCV).
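
    As a minimal software sketch of the same two stages (not the FDRS code itself, which the abstract says is written in VC++ with OpenCV), the Python snippet below runs OpenCV's AdaBoost-trained Haar cascade for detection and keeps a low-frequency block of DCT coefficients as a feature vector; the input frame is a synthetic placeholder.

    ```python
    import cv2
    import numpy as np

    # Minimal sketch (not the FDRS implementation): AdaBoost-based face detection
    # with OpenCV's bundled Haar cascade, then DCT coefficients as features.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = np.full((480, 640), 128, dtype=np.uint8)   # stand-in for a video frame
    faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)

    features = []
    for (x, y, w, h) in faces:
        face = cv2.resize(frame[y:y + h, x:x + w], (64, 64)).astype(np.float32)
        coeffs = cv2.dct(face)                   # 2-D discrete cosine transform
        features.append(coeffs[:8, :8].ravel())  # low-frequency block as the feature vector
    ```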

  8. Catechol-O-methyltransferase val(158)met Polymorphism Interacts with Sex to Affect Face Recognition Ability.

    PubMed

    Lamb, Yvette N; McKay, Nicole S; Singh, Shrimal S; Waldie, Karen E; Kirk, Ian J

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale - Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition.

  9. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task-a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios.
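
    The signal detection analyses referred to above separate sensitivity (accuracy) from response bias. A standard way to compute these indices from hit and false-alarm counts is sketched below; the counts are illustrative only and are not taken from the study.

    ```python
    from scipy.stats import norm

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        """Signal-detection indices from a 2x2 outcome table; a log-linear
        correction keeps the z-scores finite when a rate is 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa              # sensitivity (accuracy)
        criterion = -0.5 * (z_hit + z_fa)   # response bias (more negative = more liberal)
        return d_prime, criterion

    # Illustrative counts only, not the study's data:
    print(dprime_and_criterion(hits=18, misses=12, false_alarms=10, correct_rejections=20))
    ```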

  10. Blurred face recognition by fusing blur-invariant texture and structure features

    NASA Astrophysics Data System (ADS)

    Zhu, Mengyu; Cao, Zhiguo; Xiao, Yang; Xie, Xiaokang

    2015-10-01

    Blurred face recognition remains a challenging task with wide applications, since image blur can severely degrade recognition performance. Local phase quantization (LPQ) was proposed to extract blur-invariant texture information and has achieved good performance in blurred face recognition. However, LPQ captures only phase-based blur-invariant texture, which is not sufficient, and it is usually extracted holistically, so its discriminative power over local spatial structure is not fully exploited. In this paper, we propose a novel method for blurred face recognition in which blur-invariant texture and structure features are extracted and fused to give a more complete description of the blurred image. For the texture feature, LPQ is extracted in a densely sampled way and encoded with a vector of locally aggregated descriptors (VLAD) to enhance its performance. For the structure feature, the histogram of oriented gradients (HOG) is used; to improve its blur invariance, weak gradient magnitudes, which are more sensitive to blur than strong ones, are eliminated. The improved HOG is fused with the original HOG by canonical correlation analysis (CCA), and the texture and structure features are then fused by CCA to form the final blur-invariant representation of the face image. Experiments on three face datasets demonstrate that the proposed improvements and the overall method perform well in blurred face recognition.
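
    LPQ and VLAD encodings are not available off the shelf in common Python libraries, so the sketch below illustrates only the fusion step under stated assumptions: HOG structure features (scikit-image) and a stand-in texture feature matrix are projected into a shared space with CCA (scikit-learn) and concatenated, mirroring the paper's final fusion; all face data are random placeholders.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.cross_decomposition import CCA

    # Placeholder data: 30 random 32x32 "faces" and a stand-in texture feature
    # matrix (in the paper this would be densely sampled LPQ encoded with VLAD).
    rng = np.random.default_rng(0)
    faces = rng.random((30, 32, 32))
    texture_feats = rng.random((30, 64))

    # Structure features: HOG per face.
    hog_feats = np.array([hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                          for f in faces])

    # Fusion by canonical correlation analysis: project both views into a shared
    # space and concatenate the projections as the final descriptor.
    cca = CCA(n_components=8)
    hog_c, tex_c = cca.fit_transform(hog_feats, texture_feats)
    fused = np.hstack([hog_c, tex_c])
    ```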

  11. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  12. Catechol-O-methyltransferase val158met Polymorphism Interacts with Sex to Affect Face Recognition Ability

    PubMed Central

    Lamb, Yvette N.; McKay, Nicole S.; Singh, Shrimal S.; Waldie, Karen E.; Kirk, Ian J.

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale – Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition. PMID:27445927

  13. The Own-Age Bias in Face Recognition: A Meta-Analytic and Theoretical Review

    ERIC Educational Resources Information Center

    Rhodes, Matthew G.; Anastasi, Jeffrey S.

    2012-01-01

    A large number of studies have examined the finding that recognition memory for faces of one's own age group is often superior to memory for faces of another age group. We examined this "own-age bias" (OAB) in the meta-analyses reported. These data showed that hits were reliably greater for same-age relative to other-age faces (g = 0.23) and that…

  14. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study

    PubMed Central

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N.; Raffard, Stéphane

    2016-01-01

    Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using behavioral tasks with substantial cognitive demands. Here, we employed an eye-tracking methodology, which is a relevant tool for understanding self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it captures the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way to controls; and (2) recognize their own face. PMID:26903833

  15. ERP investigation of study-test background mismatch during face recognition in schizophrenia.

    PubMed

    Guillaume, Fabrice; Guillem, François; Tiberghien, Guy; Stip, Emmanuel

    2012-01-01

    Old/new effects on event-related potentials (ERP) were explored in 20 patients with schizophrenia and 20 paired comparison subjects during unfamiliar face recognition. Extrinsic perceptual changes - which influence the overall familiarity of an item while retaining face-intrinsic features for use in structural face encoding - were manipulated between the study phase and the test. The question raised here concerns whether these perceptual incongruities would have a different effect on the sense of familiarity and the corresponding behavioral and ERP measures in the two groups. The results showed that schizophrenia patients were more inclined to consider old faces shown against a new background as distractors. This drop in face familiarity was accompanied by the disappearance of ERP old/new effects in this condition, i.e., FN400 and parietal old/new effects. Indeed, while ERP old/new recognition effects were found in both groups when the picture of the face was physically identical to the one presented for study, the ERP correlates of recognition disappeared among patients when the background behind the face was different. This difficulty in disregarding a background change suggests that recognition among patients with schizophrenia is based on a global perceptual matching strategy rather than on the extraction of configural information from the face. The correlations observed between FN400 amplitude, the rejection of faces with a different background, and the reality-distortion scores support the idea that the recognition deficit found in schizophrenia results from early anomalies that are carried over onto the parietal ERP old/new effect. Face-extrinsic perceptual variations provide an opportune situation for gaining insight into the social difficulties that patients encounter throughout their lives.

  16. On the particular vulnerability of face recognition to aging: a review of three hypotheses

    PubMed Central

    Boutet, Isabelle; Taler, Vanessa; Collin, Charles A.

    2015-01-01

    Age-related face recognition deficits are characterized by high false alarms to unfamiliar faces, are not as pronounced for other complex stimuli, and are only partially related to general age-related impairments in cognition. This paper reviews some of the underlying processes likely to be implicated in these deficits by focusing on areas where contradictions abound, as a means to highlight avenues for future research. Research pertaining to the following three hypotheses is presented: (i) perceptual deterioration, (ii) encoding of configural information, and (iii) difficulties in recollecting contextual information. The evidence surveyed provides support for the idea that all three factors are likely to contribute, under certain conditions, to the deficits in face recognition seen in older adults. We discuss how these different factors might interact in the context of a generic framework of the different stages implicated in face recognition. Several suggestions for future investigations are outlined. PMID:26347670

  17. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is used to capture the composition of micro-patterns within sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to retain the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves infrared face recognition performance: the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
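
    A minimal sketch of the two ingredients named above, using scikit-image for the LBP histogram of a sub-block and SciPy's Kruskal-Wallis test to score each histogram bin by how well it separates subjects; the images and labels are placeholders and this is not the authors' implementation.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from scipy.stats import kruskal

    def lbp_histogram(block, P=8, R=1):
        """Uniform LBP histogram of one sub-block (P + 2 bins for 'uniform' codes)."""
        codes = local_binary_pattern(block, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        return hist

    # Placeholder data: 3 subjects x 4 sub-block images each.
    rng = np.random.default_rng(1)
    labels = np.repeat([0, 1, 2], 4)
    feats = np.array([lbp_histogram(rng.random((32, 32))) for _ in labels])

    # Kruskal-Wallis H per histogram bin: larger H = more class-discriminative bin.
    scores = [kruskal(*[feats[labels == c, j] for c in np.unique(labels)]).statistic
              for j in range(feats.shape[1])]
    selected = np.argsort(scores)[::-1][:5]    # keep the 5 most discriminative bins
    ```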

  18. The fusiform face area is not sufficient for face recognition: evidence from a patient with dense prosopagnosia and no occipital face area.

    PubMed

    Steeves, Jennifer K E; Culham, Jody C; Duchaine, Bradley C; Pratesi, Cristiana Cavina; Valyear, Kenneth F; Schindler, Igor; Humphrey, G Keith; Milner, A David; Goodale, Melvyn A

    2006-01-01

    We tested functional activation for faces in patient D.F., who following acquired brain damage has a profound deficit in object recognition based on form (visual form agnosia) and also prosopagnosia that is undocumented to date. Functional imaging demonstrated that like our control observers, D.F. shows significantly more activation when passively viewing face compared to scene images in an area that is consistent with the fusiform face area (FFA) (p < 0.01). Control observers also show occipital face area (OFA) activation, whereas D.F.'s lesions appear to overlap the OFA bilaterally. We asked, given that D.F. shows FFA activation for faces, to what extent is she able to recognize faces? D.F. demonstrated a severe impairment in higher level face processing--she could not recognize face identity, gender or emotional expression. In contrast, she performed relatively normally on many face categorization tasks. D.F. can differentiate faces from non-faces given sufficient texture information and processing time, and she can do this independently of color and illumination information. D.F. can use configural information for categorizing faces when they are presented in an upright but not a sideways orientation, and given that she also cannot discriminate half-faces, she may rely on a spatially symmetric feature arrangement. Faces appear to be a unique category, which she can classify even when she has no advance knowledge that she will be shown face images. Together, these imaging and behavioral data support the importance of the integrity of a complex network of regions for face identification, including more than just the FFA--in particular the OFA, a region believed to be associated with low-level processing.

  19. Face recognition using tridiagonal matrix enhanced multivariance products representation

    NASA Astrophysics Data System (ADS)

    Özay, Evrim Korkmaz

    2017-01-01

    This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm, since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of simple arithmetic operations without requiring any iteration. The algorithm has been trained and tested on the ORL face image database, which contains 400 grayscale images of 40 different people, and TMEMPR's performance is compared with that of SVD.
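
    TMEMPR itself is only outlined in the abstract, so the sketch below shows the SVD retrieval baseline it is compared against: database faces and a query face are projected onto the leading right singular vectors and ranked by distance. Image sizes and data are placeholders, not the ORL images.

    ```python
    import numpy as np

    # Placeholder database: 400 vectorized grayscale faces plus one query face.
    rng = np.random.default_rng(2)
    X = rng.random((400, 64 * 64))
    target = rng.random(64 * 64)

    # Eigenface-style retrieval: project onto the leading right singular vectors
    # and rank database images by distance to the query in that subspace.
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:40]

    db_coords = (X - mean) @ basis.T
    q_coords = (target - mean) @ basis.T
    ranking = np.argsort(np.linalg.norm(db_coords - q_coords, axis=1))
    print("closest database images:", ranking[:5])
    ```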

  20. Functional dissociation of the left and right fusiform gyrus in self-face recognition.

    PubMed

    Ma, Yina; Han, Shihui

    2012-10-01

    It is well known that the fusiform gyrus is engaged in face perception, such as the processes of face familiarity and identity. However, the functional role of the fusiform gyrus in face processing related to high-level social cognition remains unclear. The current study assessed the functional role of individually defined fusiform face area (FFA) in the processing of self-face physical properties and self-face identity. We used functional magnetic resonance imaging to monitor neural responses to rapidly presented face stimuli drawn from morph continua between self-face (Morph 100%) and a gender-matched friend's face (Morph 0%) in a face recognition task. Contrasting Morph 100% versus Morph 60% that differed in self-face physical properties but were both recognized as the self uncovered neural activity sensitive to self-face physical properties in the left FFA. Contrasting Morphs 50% that were recognized as the self versus a friend on different trials revealed neural modulations associated with self-face identity in the right FFA. Moreover, the right FFA activity correlated with the frequency of recognizing Morphs 50% as the self. Our results provide evidence for functional dissociations of the left and right FFAs in the representations of self-face physical properties and self-face identity.

  1. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
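
    meshSIFT keypoint extraction is specific to 3D meshes and is not sketched here; the snippet below illustrates only the sparse-representation classification step under stated assumptions: each probe descriptor is coded over a gallery dictionary with orthogonal matching pursuit and votes for the class with the smallest reconstruction residual. Descriptors, labels, and sizes are placeholders, not meshSIFT output.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_identity(probe_descriptors, gallery, labels, k=20):
        """Sparse-representation classification over a descriptor dictionary:
        each probe descriptor is coded with OMP and votes for the class whose
        gallery atoms give the smallest reconstruction residual."""
        classes = np.unique(labels)
        votes = np.zeros(classes.size)
        for d in probe_descriptors:
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
            omp.fit(gallery.T, d)               # columns of gallery.T are the atoms
            coef = omp.coef_
            residuals = [np.linalg.norm(d - gallery[labels == c].T @ coef[labels == c])
                         for c in classes]
            votes[int(np.argmin(residuals))] += 1
        return int(classes[np.argmax(votes)])

    # Placeholder data: 3 gallery subjects x 50 descriptors each, 128-D vectors.
    rng = np.random.default_rng(6)
    gallery = rng.standard_normal((150, 128))
    labels = np.repeat(np.arange(3), 50)
    probe = gallery[labels == 1][:10] + 0.05 * rng.standard_normal((10, 128))
    print("predicted identity:", src_identity(probe, gallery, labels))
    ```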

  2. The effect of gaze direction on three-dimensional face recognition in infant brain activity.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke

    2012-09-12

    In three-dimensional face recognition studies, it is well known that viewing rotating faces enhances face recognition. Our previous study indicated that 8-month-old infants recognized three-dimensional rotating faces with a direct gaze but did not learn faces presented with an averted gaze. This suggests that gaze direction may affect three-dimensional face recognition in infants. In this experiment, we used near-infrared spectroscopy to measure infants' hemodynamic responses to averted gaze and direct gaze. We hypothesized that infants would show different neural activity for averted and direct gazes. The responses were compared with the baseline activation during the presentation of non-face objects. We found that the concentration of oxyhemoglobin increased in the temporal cortex on both sides only during the presentation of averted gaze compared with the baseline period. This is the first study to show that infants' brain activity in three-dimensional face processing differs between averted and direct gaze.

  3. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.

  4. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the people around them, so correctly recognizing people within VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring the face images requires subjects to move close to the camera so that a frontal pose and adequate illumination are guaranteed. Moreover, face labels are assigned manually rather than automatically, and labels belonging to different classes usually need to be entered one by one. These constraints prevent practical assisting applications for VIP. In this article, a face recognition system for unconstrained environments is proposed; unlike previous algorithms, it requires neither a frontal pose nor uniform illumination. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis is implemented, and the synthesized frontal faces help to increase the recognition rate, which is confirmed by the experimental results. Second, an RGB-D camera plays a significant role in the system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, neural networks are used to train the face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. This system is expected to provide convenient help for VIP in getting familiar with others and to enable them to recognize people once the system has been sufficiently trained.

  5. Face and Emotion Recognition in MCDD versus PDD-NOS

    ERIC Educational Resources Information Center

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  6. Self-Face and Self-Body Recognition in Autism

    ERIC Educational Resources Information Center

    Gessaroli, Erica; Andreini, Veronica; Pellegri, Elena; Frassinetti, Francesca

    2013-01-01

    The advantage in responding to self vs. others' body and face-parts (the so called self-advantage) is considered to reflect the implicit access to the bodily self representation and has been studied in healthy and brain-damaged adults in previous studies. If the distinction of the self from others is a key aspect of social behaviour and is a…

  7. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

    An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.
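
    As a much-simplified stand-in for the adaptive correlation filter bank described above, the sketch below scores a scene against a small dictionary of templates (each notionally rendered at a known pose) with OpenCV's normalized cross-correlation and returns the best-scoring pose; the scene and templates are synthetic placeholders.

    ```python
    import cv2
    import numpy as np

    def best_pose(scene, templates):
        """Score the scene against each template (notionally rendered at a known
        pose) with normalized cross-correlation and return the best pose."""
        scores = [(cv2.matchTemplate(scene, t, cv2.TM_CCOEFF_NORMED).max(), pose)
                  for pose, t in templates.items()]
        return max(scores)

    # Placeholder scene and two synthetic "rendered views" (assumed data):
    rng = np.random.default_rng(3)
    scene = rng.random((240, 320)).astype(np.float32)
    templates = {"yaw=0": scene[60:160, 100:200].copy(),
                 "yaw=30": rng.random((100, 100)).astype(np.float32)}
    print(best_pose(scene, templates))   # "yaw=0" should win: its template is cut from the scene
    ```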

  8. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition offers advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limit on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image-quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. This makes it possible to evaluate and utilize only the reliable features among the trained ones during each authentication and to achieve high recognition performance rates. The field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a constant high recognition performance rate independent of face image quality, that is, an EER (Equal Error Rate) about four times lower under a variety of image conditions than one without any prior probability distributions. By contrast, using image-difference features without any prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over the overall data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance as good as PCA under poor image conditions.
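
    The field evaluation above is reported in terms of EER. A common way to estimate EER from genuine and impostor match scores is sketched below using scikit-learn's ROC utilities; the score distributions are made up for illustration and are not the field-trial data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    def equal_error_rate(genuine_scores, impostor_scores):
        """EER: the operating point where the false accept and false reject rates meet."""
        y = np.r_[np.ones_like(genuine_scores), np.zeros_like(impostor_scores)]
        s = np.r_[genuine_scores, impostor_scores]
        fpr, tpr, _ = roc_curve(y, s)
        fnr = 1.0 - tpr
        i = np.nanargmin(np.abs(fnr - fpr))
        return (fpr[i] + fnr[i]) / 2.0

    # Illustrative score distributions only (not the field-trial data):
    rng = np.random.default_rng(4)
    print(equal_error_rate(rng.normal(0.7, 0.1, 500), rng.normal(0.4, 0.1, 500)))
    ```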

  9. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.

    PubMed

    Zhang, Baochang; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-01-01

    A novel object descriptor, histogram of Gabor phase pattern (HGPP), is proposed for robust face recognition. In HGPP, the quadrant-bit codes are first extracted from faces based on the Gabor transformation. Global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changing of Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. They are both divided into the nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, the recognition is performed by using the nearest-neighbor classifier with histogram intersection as the similarity measurement. The features of HGPP lie in two aspects: 1) HGPP can describe the general face images robustly without the training procedure; 2) HGPP encodes the Gabor phase information, while most previous face recognition methods exploit the Gabor magnitude information. In addition, Fisher separation criterion is further used to improve the performance of HGPP by weighing the subregions of the image according to their discriminative powers. The proposed methods are successfully applied to face recognition, and the experiment results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate.
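
    A heavily simplified sketch of the phase-coding idea (not the full GGPP/LGPP pipeline): a face is filtered with an even and an odd Gabor kernel from OpenCV, the quadrant of the complex response's phase is coded with two bits, and histograms of the codes can be compared by histogram intersection. Parameters and the input image are placeholders.

    ```python
    import cv2
    import numpy as np

    def quadrant_bits(img, ksize=31, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
        """Two-bit quadrant code of the Gabor phase at one scale/orientation:
        one bit for the sign of the even (real) response, one for the odd (imaginary)."""
        k_even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        k_odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=np.pi / 2)
        re = cv2.filter2D(img, cv2.CV_32F, k_even)
        im = cv2.filter2D(img, cv2.CV_32F, k_odd)
        return 2 * (re >= 0).astype(np.uint8) + (im >= 0).astype(np.uint8)

    def histogram_intersection(h1, h2):
        return float(np.minimum(h1, h2).sum())   # similarity used by the nearest-neighbour step

    face = np.random.rand(128, 128).astype(np.float32)   # placeholder face image
    codes = quadrant_bits(face)
    hist, _ = np.histogram(codes, bins=np.arange(5), density=True)
    ```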

  10. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, the emergence of 3D shape in face recognition has been driven by its robustness to pose and illumination changes. These attractive benefits do not remove all of the challenges to achieving a satisfactory recognition rate: other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we proceed to voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all of the training wavelet networks, yielding a new feature vector after each projection, and a similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
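
    The wavelet-network training itself is not shown here; the sketch below covers only the voxelization and 3D wavelet decomposition steps, assuming PyWavelets is available. The point cloud, grid size, and choice of the Haar wavelet are placeholders rather than the authors' settings.

    ```python
    import numpy as np
    import pywt

    # Placeholder (x, y, z) point cloud standing in for a 3D face scan.
    rng = np.random.default_rng(5)
    points = rng.random((5000, 3))

    # Occupancy voxelization onto a 32^3 grid.
    grid = 32
    idx = np.minimum((points * grid).astype(int), grid - 1)
    volume = np.zeros((grid, grid, grid), dtype=np.float32)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0

    # Single-level 3D discrete wavelet transform; keep the low-pass sub-band
    # as a compact feature (the wavelet-network modelling step is omitted).
    coeffs = pywt.dwtn(volume, wavelet="haar")
    feature = coeffs["aaa"].ravel()
    ```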

  11. Face ethnicity and measurement reliability affect face recognition performance in developmental prosopagnosia: evidence from the Cambridge Face Memory Test-Australian.

    PubMed

    McKone, Elinor; Hall, Ashleigh; Pidcock, Madeleine; Palermo, Romina; Wilkinson, Ross B; Rivolta, Davide; Yovel, Galit; Davis, Joshua M; O'Connor, Kirsty B

    2011-03-01

    The Cambridge Face Memory Test (CFMT, Duchaine & Nakayama, 2006) provides a validated format for testing novel face learning and has been a crucial instrument in the diagnosis of developmental prosopagnosia. Yet, some individuals who report everyday face recognition symptoms consistent with prosopagnosia, and are impaired on famous face tasks, perform normally on the CFMT. Possible reasons include measurement error, CFMT assessment of memory only at short delays, and a face set whose ethnicity is matched to only some Caucasian groups. We develop the "CFMT-Australian" (CFMT-Aus), which complements the CFMT-original by using ethnicity better matched to a different European subpopulation. Results confirm reliability (.88) and validity (convergent, divergent using cars, inversion effects). We show that face ethnicity within a race has subtle but clear effects on face processing even in normal participants (includes cross-over interaction for face ethnicity by perceiver country of origin in distinctiveness ratings). We show that CFMT-Aus clarifies diagnosis of prosopagnosia in 6 previously ambiguous cases. In 3 cases, this appears due to the better ethnic match to prosopagnosics. We also show that face memory at short (<3-min), 20-min, and 24-hr delays taps overlapping processes in normal participants. There is some suggestion that a form of prosopagnosia may exist that is long delay only and/or reflects failure to benefit from face repetition.

  12. Super resolution based face recognition: do we need training image set?

    NASA Astrophysics Data System (ADS)

    Al-Hassan, Nadia; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with face recognition under uncontrolled conditions, e.g. surveillance at a distance and post-riot forensics, where captured face images are severely degraded/blurred and of low resolution. This is a tough challenge due to many factors, including capture conditions. We present the results of our investigations into recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes using a variety of overcomplete dictionaries that construct super-resolved face images from any low-resolution, degraded input face image. We demonstrate that deterministic as well as non-deterministic dictionaries that do not use face image information, but satisfy some form of the Restricted Isometry Property used in CS, can achieve face recognition accuracy as good as, if not better than, that achieved by dictionaries proposed in the literature which are learned from face image databases using elaborate procedures. We elaborate on how this approach can help in fighting crime and terrorism.
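
    A toy illustration of the dictionary idea under stated assumptions: a random Gaussian dictionary (the kind of non-deterministic dictionary that satisfies RIP-type conditions with high probability) is used to recover a sparse code from a low-resolution measurement with orthogonal matching pursuit, and the code is re-expanded in the high-resolution dictionary. Sizes, the degradation operator, and the sparsity level are all placeholders, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(8)
    n_hr, n_lr, n_atoms, sparsity = 256, 64, 512, 10

    # Random Gaussian high-resolution dictionary and a linear degradation operator;
    # the low-resolution dictionary is the degraded version of the same atoms.
    D_hr = rng.standard_normal((n_hr, n_atoms))
    degrade = rng.standard_normal((n_lr, n_hr)) / np.sqrt(n_hr)
    D_lr = degrade @ D_hr

    # Synthetic sparse ground-truth code and the observed low-resolution patch.
    code = np.zeros(n_atoms)
    code[rng.choice(n_atoms, sparsity, replace=False)] = 1.0
    y_lr = D_lr @ code

    # Sparse recovery with OMP, then re-expansion in the high-resolution dictionary.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(D_lr, y_lr)
    x_hr = D_hr @ omp.coef_    # super-resolved patch estimate
    ```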

  13. Motion as a cue to face recognition: evidence from congenital prosopagnosia.

    PubMed

    Longmore, Christopher A; Tree, Jeremy J

    2013-04-01

    Congenital prosopagnosia is a condition that, present from an early age, makes it difficult for an individual to recognise someone from his or her face. Typically, research into prosopagnosia has employed static images that do not contain the extra information we can obtain from moving faces and, as a result, very little is known about the role of facial motion for identity processing in prosopagnosia. Two experiments comparing the performance of four congenital prosopagnosics with that of age-matched and younger controls on their ability to learn and recognise (Experiment 1) and match (Experiment 2) novel faces are reported. It was found that younger controls' recognition memory performance increased with dynamic presentation; however, only one of the four prosopagnosics showed any improvement. Motion aided the matching performance of age-matched controls and of all prosopagnosics. In addition, the face inversion effect, an effect that tends to be reduced in prosopagnosia, emerged when prosopagnosics matched moving faces. The results suggest that facial motion can be used as a cue to identity, but that this may be a complex and difficult cue to retain. As prosopagnosics' performance improved with the dynamic presentation of faces, it would appear that prosopagnosics can use motion as a cue to recognition, and the different patterns for the face inversion effect that occurred in the prosopagnosics for static and dynamic faces suggest that the mechanisms used for dynamic facial motion recognition are dissociable from static mechanisms.

  14. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted sets of all the modules (sub-regions) of the face image.
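
    The following Python sketch illustrates the modular pipeline described above: per-region texture histograms, PCA reduction, and variance-based region weights. The standard uniform LBP from scikit-image stands in for the paper's enhanced LBP (ELBP), which is not reproduced here; the grid size, radius, and component count are illustrative assumptions.

    ```python
    # Sketch of the modular pipeline: per-region LBP histograms -> PCA -> variance
    # weights -> concatenation. Standard LBP is a stand-in for the paper's ELBP.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.decomposition import PCA

    P, R = 8, 1                       # LBP neighborhood
    N_BINS = P + 2                    # "uniform" LBP yields P + 2 distinct codes
    GRID = (4, 4)                     # split the face into 4x4 sub-regions

    def region_histograms(face):
        """LBP histogram for every sub-region of a grayscale face image."""
        codes = local_binary_pattern(face, P, R, method="uniform")
        h, w = face.shape
        rh, rw = h // GRID[0], w // GRID[1]
        hists = []
        for i in range(GRID[0]):
            for j in range(GRID[1]):
                block = codes[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
                hist, _ = np.histogram(block, bins=N_BINS, range=(0, N_BINS))
                hists.append(hist.astype(float) / max(hist.sum(), 1))
        return hists

    def weighted_features(faces, n_components=10):
        """PCA-reduce each region, then weight it by its local variance estimate."""
        per_region = list(zip(*[region_histograms(f) for f in faces]))
        feats = []
        for region_hists in per_region:
            X = np.vstack(region_hists)
            Xr = PCA(n_components=min(n_components, *X.shape)).fit_transform(X)
            weight = X.var()                      # region significance (stand-in rule)
            feats.append(weight * Xr)
        return np.hstack(feats)                   # concatenated weighted region features
    ```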

  15. Age-Related Differences in Brain Electrical Activity during Extended Continuous Face Recognition in Younger Children, Older Children and Adults

    ERIC Educational Resources Information Center

    Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.

    2011-01-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…

  16. Genetic improvements in illumination compensation by the discrete cosine transform and local normalization for face recognition

    NASA Astrophysics Data System (ADS)

    Perez, Claudio A.; Castillo, Luis E.

    2008-11-01

    Face detection and recognition depend strongly on illumination conditions. In this paper, we present improvements in two illumination compensation methods for face recognition. Using genetic algorithms (GA), we select parameters of the Discrete Cosine Transform (DCT) and Local Normalization (LN) methods to improve face recognition. In the DCT method, all low-frequency components within an isosceles triangle of side Ddis are eliminated; the best results were reported for Ddis=20. The LN method normalizes the values within a window by their mean and standard deviation; the best results were reported for window sizes of 7x7. In the case of the DCT method, we use a GA to assign weights that eliminate the coefficients of the low-frequency components. In the case of the LN method, for a fixed window size of 7x7, we select the normalization method by a GA. We compare the results of our proposed method to those with no illumination compensation and to those previously published for the DCT and LN methods. We use three internationally available face databases, Yale B, CMU PIE and FERET, where the first two contain face images with significant changes in illumination conditions. We used Yale B for training and CMU PIE and FERET for testing. Our results show significant improvements in face recognition on the testing databases. Our method performs similarly to or slightly better than the DCT or LN methods on images with non-homogeneous illumination, and much better than DCT or LN on images with homogeneous illumination.
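
    For illustration, the Python sketch below implements the two baseline compensation steps that the paper then tunes with a genetic algorithm (the GA itself is omitted): zeroing the low-frequency DCT coefficients inside a triangle of side Ddis, and normalizing each pixel by the mean and standard deviation of a small window. Whether the DC coefficient is retained, and the default parameter values, are assumptions for the sketch.

    ```python
    # Baseline illumination compensation: (1) discard low-frequency DCT
    # coefficients inside a triangle of side ddis; (2) local mean/std normalization.
    import numpy as np
    from scipy.fftpack import dct, idct
    from scipy.ndimage import uniform_filter

    def dct2(a):  return dct(dct(a.T, norm="ortho").T, norm="ortho")
    def idct2(a): return idct(idct(a.T, norm="ortho").T, norm="ortho")

    def dct_compensation(img, ddis=20):
        """Zero DCT coefficients with u + v < ddis (DC kept here, an assumed choice)."""
        C = dct2(img.astype(float))
        u, v = np.indices(C.shape)
        mask = (u + v) < ddis
        mask[0, 0] = False                 # keep overall brightness
        C[mask] = 0.0
        return idct2(C)

    def local_normalization(img, win=7, eps=1e-6):
        """(pixel - local mean) / local std over a win x win neighborhood."""
        x = img.astype(float)
        mean = uniform_filter(x, size=win)
        sq_mean = uniform_filter(x * x, size=win)
        std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
        return (x - mean) / (std + eps)
    ```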

  17. Face recognition across makeup and plastic surgery from real-world images

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

    A feature extraction approach is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make a face recognition method robust to facial appearance changes, features are individually extracted from facial depth, on which facial makeup and plastic surgery have no effect. These facial depth features are then added to facial texture features to perform feature extraction. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and the reconstructed depth images to extract the feature vectors. Finally, the final feature vectors are generated by combining the 2-D and 3-D feature vectors, and are classified by a support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases, YouTube makeup and virtual makeup, and for plastic surgery-invariant face recognition on a plastic surgery face database, where the method is compared against several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.

  18. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
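
    The VLC is an optical architecture, but its core operation has a simple digital analogue, sketched below in Python: per-channel FFT-based cross-correlation of a target face against a reference after decomposing the color image into normalized RGB and HSV planes. The segmented phase filter and the fusion rule used here (a plain average of channel peaks) are assumptions for illustration, not the authors' design.

    ```python
    # Digital analogue of correlation-based matching on color-decomposed planes.
    import numpy as np
    import colorsys

    def channel_planes(rgb):
        """Normalized r,g,b planes plus h,s,v planes for an RGB float image."""
        s = rgb.sum(axis=2, keepdims=True) + 1e-9
        nrgb = rgb / s                                     # normalized RGB
        hsv = np.vectorize(colorsys.rgb_to_hsv)(rgb[..., 0], rgb[..., 1], rgb[..., 2])
        return [nrgb[..., k] for k in range(3)] + list(hsv)

    def correlation_peak(target, reference):
        """Peak of the normalized cross-correlation computed in the Fourier domain."""
        t = target - target.mean()
        r = reference - reference.mean()
        corr = np.fft.ifft2(np.fft.fft2(t) * np.conj(np.fft.fft2(r))).real
        denom = np.linalg.norm(t) * np.linalg.norm(r) + 1e-9
        return corr.max() / denom

    def color_correlation_score(target_rgb, reference_rgb):
        """Average correlation peak over all color planes (one possible fusion rule)."""
        scores = [correlation_peak(t, r)
                  for t, r in zip(channel_planes(target_rgb), channel_planes(reference_rgb))]
        return float(np.mean(scores))
    ```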

  19. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of this study is to demonstrate that steroid usage significantly affects human facial appearance and, hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques to the same face datasets, and finally, we applied FR algorithms to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtains the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, together with the pre-processing and restoration applied to them, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  20. Stereotype Priming in Face Recognition: Interactions between Semantic and Visual Information in Face Encoding

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.; Honey, R. C.

    2008-01-01

    The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…

  1. Cross-age effect in recognition performance and memory monitoring for faces.

    PubMed

    Bryce, Margaret S; Dodson, Chad S

    2013-03-01

    The cross-age effect refers to the finding of better memory for own- than other-age faces. We examined 3 issues about this effect: (1) Does it extend to the ability to monitor the likely accuracy of memory judgments for young and old faces? (2) Does it apply to source information that is associated with young and old faces? And (3) what is a likely mechanism underlying the cross-age effect? In Experiment 1, young and older adults viewed young and old faces appearing in different contexts. Young adults exhibited a cross-age effect in their recognition of faces and in their memory-monitoring performance for these faces. Older adults, by contrast, showed no age-of-face effects. Experiment 2 examined whether young adults' cross-age effect depends on or is independent of encoding a mixture of young and old faces. Young adults encoded either a mixture of young and old faces, a set of all young faces, or a set of all old faces. In the mixed-list condition we replicated our finding of young adults' superior memory for own-age faces; in the pure-list conditions, however, there were absolutely no differences in performance between young and old faces. The fact that the pure-list design abolishes the cross-age effect supports social-cognitive theories of this phenomenon.

  2. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    PubMed

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  3. Illumination-invariant face recognition with a contrast sensitive silicon retina

    SciTech Connect

    Buhmann, J.M.; Lades, M.; Eeckman, F.

    1993-11-29

    Changes in lighting conditions strongly affect the performance and reliability of computer vision systems. We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast-sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices the face recognition system employs an elastic matching algorithm with wavelet-based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina, the CCD images have been digitally preprocessed with a bandpass filter to adjust the power spectrum. The silicon retina, with its ability to adjust sensitivity, increases the recognition rate by up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.
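
    The retina's contrast-sensitive preprocessing is analog hardware, but a common digital stand-in for the bandpass filtering applied to the CCD images is a difference-of-Gaussians filter, sketched below; the sigma values are illustrative, not the paper's settings.

    ```python
    # Difference-of-Gaussians bandpass: suppresses slowly varying illumination
    # and high-frequency noise, keeping mid-frequency (contrast) structure.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bandpass_dog(img, sigma_low=1.0, sigma_high=4.0):
        """Bandpass an image by subtracting a coarse blur from a fine blur."""
        x = img.astype(float)
        return gaussian_filter(x, sigma_low) - gaussian_filter(x, sigma_high)
    ```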

  4. Face Recognition via Ensemble SIFT Matching of Uncorrelated Hyperspectral Bands and Spectral PCTs

    DTIC Science & Technology

    2011-06-01

    The indexed excerpt of this report is fragmentary. The recoverable related-work discussion cites eigenfaces for recognition by Turk and Pentland, hyperspectral face recognition by Pan et al. (2003, 2005), and Luo et al. (2007), who showed that a direct implementation of SIFT does not work well under these conditions.

  5. EEG asymmetries in recognition of faces: comparison with a tachistoscopic technique.

    PubMed

    Rapaczynski, W; Ehrlichman, H

    1979-11-01

    Twelve field-dependent and twelve field-independent women, who had previously shown opposite superiorities in a tachistoscopic face recognition task, returned to the laboratory for a session in which EEG asymmetry was measured during two facial and two verbal recognition tasks. Although task-related EEG asymmetries were observed, there was no effect of cognitive style on either direction or amount of asymmetry. These results suggest a lack of comparability among different methods of assessing individual differences in lateral functions.

  6. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
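
    A minimal Python sketch of that pipeline is shown below: a simple illumination normalization, an edge map, and a small Gabor filter bank applied to the edge image. The Canny detector, the z-score normalization, and the frequency/orientation values are stand-ins chosen for the sketch, not the paper's implementation.

    ```python
    # Sketch of edge-based Gabor features: normalize, extract edges, filter the
    # edge image with a Gabor bank, and concatenate the magnitude responses.
    import numpy as np
    from skimage.feature import canny
    from skimage.filters import gabor

    def edge_gabor_features(face, frequencies=(0.1, 0.2, 0.3), n_orient=4):
        """Concatenate Gabor magnitude responses computed on the edge image."""
        x = face.astype(float)
        x = (x - x.mean()) / (x.std() + 1e-6)      # simple illumination normalization
        edges = canny(x).astype(float)             # edge map of facial components
        feats = []
        for f in frequencies:
            for k in range(n_orient):
                real, imag = gabor(edges, frequency=f, theta=k * np.pi / n_orient)
                feats.append(np.hypot(real, imag).ravel())
        return np.concatenate(feats)
    ```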

  7. Real-Time Measurement of Face Recognition in Rapid Serial Visual Presentation

    PubMed Central

    Touryan, Jon; Gibson, Laurie; Horne, James H.; Weber, Paul

    2011-01-01

    Event-related potentials (ERPs) have been used extensively to study the processes involved in recognition memory. In particular, the early familiarity component of recognition has been linked to the FN400 (mid-frontal negative deflection between 300 and 500 ms), whereas the recollection component has been linked to a later positive deflection over the parietal cortex (500–800 ms). In this study, we measured the ERPs elicited by faces with varying degrees of familiarity. Participants viewed a continuous sequence of faces with either low (novel faces), medium (celebrity faces), or high (faces of friends and family) familiarity while performing a separate face-identification task. We found that the level of familiarity was significantly correlated with the magnitude of both the early and late recognition components. Additionally, by using a single-trial classification technique, applied to the entire evoked response, we were able to distinguish between familiar and unfamiliar faces with a high degree of accuracy. The classification of high versus low familiarity resulted in areas under the curve of up to 0.99 for some participants. Interestingly, our classifier model (a linear discriminant function) was developed using a completely separate object categorization task on a different population of participants. PMID:21716601
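
    For illustration, the Python sketch below performs single-trial classification of familiar versus unfamiliar faces from flattened evoked responses with a linear discriminant function, scored by area under the ROC curve. The array shapes, placeholder random data, and the within-dataset train/test split are assumptions; the paper trained its discriminant on a separate task and a different participant population.

    ```python
    # Single-trial ERP classification with a linear discriminant, scored by AUC.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_times = 200, 32, 150
    epochs = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG epochs
    labels = rng.integers(0, 2, n_trials)                          # 1 = familiar, 0 = unfamiliar

    X = epochs.reshape(n_trials, -1)              # flatten channels x time per trial
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage for many features
    lda.fit(X_tr, y_tr)
    scores = lda.decision_function(X_te)
    # With random placeholder data the AUC will hover near chance (0.5).
    print("AUC:", roc_auc_score(y_te, scores))
    ```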

  8. An in-depth cognitive examination of individuals with superior face recognition skills.

    PubMed

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

    Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings.

  9. Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults

    PubMed Central

    Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah

    2016-01-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706

  10. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view'.

  11. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial appearance, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results suggest that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
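
    A minimal Python sketch of this combination is given below, assuming PyWavelets: a 2-D discrete wavelet transform keeps the low-frequency approximation sub-band of each face, and PCA is then applied to the flattened sub-bands, eigenface-style. The wavelet choice, decomposition level, and component count are illustrative, not the paper's settings.

    ```python
    # Wavelet decomposition (approximation sub-band) followed by PCA projection.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_approximation(face, wavelet="haar", level=2):
        """Return the level-`level` low-frequency approximation of a grayscale face."""
        coeffs = pywt.wavedec2(face.astype(float), wavelet, level=level)
        return coeffs[0]                               # cA: approximation sub-band

    def wavelet_pca_features(faces, n_components=20):
        """Project wavelet approximations of all faces onto their principal components."""
        X = np.vstack([wavelet_approximation(f).ravel() for f in faces])
        pca = PCA(n_components=min(n_components, *X.shape))
        return pca, pca.fit_transform(X)
    ```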

  12. Oxytocin eliminates the own-race bias in face recognition memory.

    PubMed

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2014-09-11

    The neuropeptide Oxytocin influences a number of social behaviors, including the processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poorer recognition memory for other-race faces compared to same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB emerged, but Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members, and it underscores the importance of social encoding mechanisms underlying the own-race bias. This article is part of a Special Issue entitled Oxytocin and Social Behav.

  13. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  14. An automated tool for face recognition using visual attention and active shape models analysis.

    PubMed

    Faro, A; Giordano, D; Spampinato, C

    2006-01-01

    An entirely automated approach for recognizing a person's face from her/his images is presented. The approach uses a computational attention module to automatically find the most relevant facial features using the Focus of Attention (FOA). These features are used to build the model of a face during the learning phase and for recognition during the testing phase. The landmarking of the features is performed by applying the active contour model (ACM) technique, whereas the active shape model (ASM) is adopted for constructing a flexible model of the selected facial features. The advantages of this approach and opportunities for further improvements are discussed.

  15. Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins

    PubMed Central

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One’s own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin’s face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  16. Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.

    PubMed

    Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley

    2011-07-01

    Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.

  17. Recognition memory for distractor faces depends on attentional load at exposure.

    PubMed

    Jenkins, Rob; Lavie, Nilli; Driver, Jon

    2005-04-01

    Incidental recognition memory for faces previously exposed as task-irrelevant distractors was assessed as a function of the attentional load of an unrelated task performed on superimposed letter strings at exposure. In Experiment 1, subjects were told to ignore the faces and either to judge the color of the letters (low load) or to search for an angular target letter among other angular letters (high load). A surprise recognition memory test revealed that despite the irrelevance of all faces at exposure, those exposed under low-load conditions were later recognized, but those exposed under high-load conditions were not. Experiment 2 found a similar pattern when both the high- and low-load tasks required shape judgments for the letters but made differing attentional demands. Finally, Experiment 3 showed that high load in a nonface task can significantly reduce even immediate recognition of a fixated face from the preceding trial. These results demonstrate that load in a nonface domain (e.g., letter shape) can reduce face recognition, in accord with Lavie's load theory. In addition to their theoretical impact, these results may have practical implications for eyewitness testimony.

  18. A kernel Gabor-based weighted region covariance matrix for face recognition.

    PubMed

    Qin, Huafeng; Qin, Lan; Xue, Lian; Li, Yantao

    2012-01-01

    This paper proposes a novel image region descriptor for face recognition, named the kernel Gabor-based weighted region covariance matrix (KGWRCM). As different facial parts differ in how effective they are for characterizing and recognizing faces, we construct a weighting matrix by computing the similarity of each pixel within a face sample, in order to emphasize features. We then incorporate the weighting matrices into a region covariance matrix, named the weighted region covariance matrix (WRCM), to obtain discriminative features of faces for recognition. Finally, to further preserve discriminative features in a higher-dimensional space, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results show that the KGWRCM outperforms other algorithms, including the kernel Gabor-based region covariance matrix (KGCRM).
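
    The Python sketch below shows the core of a weighted region covariance descriptor, with the Gabor bank and the kernel step of KGWRCM omitted: each pixel contributes a feature vector (coordinates, intensity, gradients), and per-pixel weights emphasize some pixels over others. The gradient-magnitude weighting rule is a generic stand-in for the paper's similarity-based construction.

    ```python
    # Weighted region covariance of per-pixel feature vectors (5 x 5 descriptor).
    import numpy as np

    def pixel_features(region):
        """Stack (x, y, intensity, |dI/dx|, |dI/dy|) for every pixel in the region."""
        region = region.astype(float)
        gy, gx = np.gradient(region)
        yy, xx = np.indices(region.shape)
        return np.stack([xx.ravel(), yy.ravel(), region.ravel(),
                         np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)

    def weighted_region_covariance(region, weights=None):
        """Weighted covariance matrix of the pixel feature vectors."""
        F = pixel_features(region)
        if weights is None:                    # stand-in: weight pixels by gradient magnitude
            weights = np.hypot(*np.gradient(region.astype(float))).ravel() + 1e-6
        w = weights / weights.sum()
        mu = (w[:, None] * F).sum(axis=0)
        Fc = F - mu
        return (w[:, None] * Fc).T @ Fc        # weighted covariance descriptor
    ```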

  19. The effects of inversion and familiarity on face versus body cues to person recognition.

    PubMed

    Robbins, Rachel A; Coltheart, Max

    2012-10-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they do use the face more than the body, both for familiar (trained) person recognition and for unfamiliar person matching. However, we also show that headless bodies are recognized and matched with very high accuracy. We further show that processing style may be similar for faces and bodies, with inversion effects found in all cases (bodies with heads, faces alone and bodies alone), and evidence that mismatching bodies and heads causes interference. We suggest that recent findings of no inversion effect when stimuli are headless bodies may have been obtained because the stimuli led viewers to focus on nonbody aspects (e.g., clothes) or because pose and identity tasks led to somewhat different processing. Our results are consistent with holistic processing for bodies as well as faces.

  1. Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

    PubMed Central

    Esins, Janina; Schultz, Johannes; Wallraven, Christian; Bülthoff, Isabelle

    2014-01-01

    Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP. PMID:25324757

  2. Sparsity preserving discriminative learning with applications to face recognition

    NASA Astrophysics Data System (ADS)

    Ren, Yingchun; Wang, Zhicheng; Chen, Yufei; Shan, Xiaoying; Zhao, Weidong

    2016-01-01

    The extraction of effective features is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in feature extraction. A supervised learning method, called sparsity preserving discriminative learning (SPDL), is proposed. SPDL, which attempts to preserve the sparse representation structure of the data and simultaneously maximize the between-class separability, can be regarded as a combination of manifold learning and sparse representation. More specifically, SPDL first creates a concatenated dictionary by class-wise principal component analysis decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDL integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the effectiveness of the proposed approach.
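
    The first SPDL step, as described above, can be sketched in a few lines of Python: concatenate class-wise PCA bases into one dictionary and obtain each sample's representation under that dictionary by least squares. The separability term, the discriminant construction, and the generalized eigenvalue step are omitted; component counts and function names are illustrative.

    ```python
    # Class-wise PCA dictionary plus least-squares representation codes (SPDL step 1).
    import numpy as np
    from sklearn.decomposition import PCA

    def classwise_dictionary(X, y, n_components=5):
        """Concatenate the principal axes of every class into one dictionary (d x K)."""
        atoms = []
        for c in np.unique(y):
            Xc = X[y == c]
            pca = PCA(n_components=min(n_components, *Xc.shape)).fit(Xc)
            atoms.append(pca.components_.T)            # class-specific basis vectors
        return np.hstack(atoms)

    def representation_codes(X, D):
        """Least-squares coefficients of every sample under the dictionary D."""
        codes, *_ = np.linalg.lstsq(D, X.T, rcond=None)
        return codes.T                                  # one coefficient vector per sample
    ```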

  3. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.

  4. Enhanced patterns of oriented edge magnitudes for face recognition and image matching.

    PubMed

    Vu, Ngoc-Son; Caplier, Alice

    2012-03-01

    A good feature descriptor is desired to be discriminative, robust, and computationally inexpensive in terms of both time and storage requirements. In the domain of face recognition, these properties allow the system to quickly deliver high recognition results to the end user. Motivated by the recent feature descriptor called Patterns of Oriented Edge Magnitudes (POEM), which balances these three concerns, this paper aims at enhancing its performance with respect to all these criteria. To this end, we first optimize the parameters of POEM and then apply the whitened principal-component-analysis dimensionality reduction technique to get a more compact, robust, and discriminative descriptor. For face recognition, the efficiency of our algorithm is proved by strong results obtained on both constrained (Face Recognition Technology, FERET) and unconstrained (Labeled Faces in the Wild, LFW) data sets, in addition to its low complexity. Impressively, our algorithm is about 30 times faster than those based on Gabor filters. Furthermore, by proposing an additional technique that makes our descriptor robust to rotation, we validate its efficiency for the task of image matching.
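
    The dimensionality-reduction step alone is easy to illustrate; the Python sketch below applies whitened PCA to raw descriptor vectors and compares them with cosine similarity, a common pairing for whitened features. The POEM descriptor itself is not reproduced, and the component count is an assumed value.

    ```python
    # Whitened PCA on descriptor vectors, matched with cosine similarity.
    import numpy as np
    from sklearn.decomposition import PCA

    def fit_whitened_pca(descriptors, n_components=100):
        """Learn a whitened PCA projection from a matrix of training descriptors."""
        n_components = min(n_components, *descriptors.shape)
        return PCA(n_components=n_components, whiten=True).fit(descriptors)

    def cosine_score(pca, d1, d2):
        """Cosine similarity between two descriptors in the whitened subspace."""
        a, b = pca.transform(np.vstack([d1, d2]))
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ```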

  5. Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

    DTIC Science & Technology

    2013-06-01

    The indexed excerpt of this report is fragmentary and contains only reference-list fragments, including T. Cootes, C. Taylor, and J. Graham, "Active shape models - their training and application," CVIU, 61:38-59, 1995, and a technical report on l1-minimization algorithms for robust face recognition (arXiv:1007.3753, University of California, Berkeley, 2012).

  6. Self-face recognition in children with autism spectrum disorders: a near-infrared spectroscopy study.

    PubMed

    Kita, Yosuke; Gunji, Atsuko; Inoue, Yuki; Goto, Takaaki; Sakihara, Kotoe; Kaga, Makiko; Inagaki, Masumi; Hosokawa, Toru

    2011-06-01

    It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness.

  7. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  8. A Smile Enhances 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Turati, Chiara; Montirosso, Rosario; Brenna, Viola; Ferrara, Veronica; Borgatti, Renato

    2011-01-01

    Recent studies demonstrated that in adults and children recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old…

  9. Integrating Illumination, Motion, and Shape Models for Robust Face Recognition in Video

    NASA Astrophysics Data System (ADS)

    Xu, Yilei; Roy-Chowdhury, Amit; Patel, Keyur

    2007-12-01

    The use of video sequences for face recognition has been relatively less studied compared to image-based approaches. In this paper, we present an analysis-by-synthesis framework for face recognition from video sequences that is robust to large changes in facial pose and lighting conditions. This requires tracking the video sequence, as well as recognition algorithms that are able to integrate information over the entire video; we address both these problems. Our method is based on a recently obtained theoretical result that can integrate the effects of motion, lighting, and shape in generating an image using a perspective camera. This result can be used to estimate the pose and structure of the face and the illumination conditions for each frame in a video sequence in the presence of multiple point and extended light sources. We propose a new inverse compositional estimation approach for this purpose. We then synthesize images using the face model estimated from the training data corresponding to the conditions in the probe sequences. Similarity between the synthesized and the probe images is computed using suitable distance measurements. The method can handle situations where the pose and lighting conditions in the training and testing data are completely disjoint. We show detailed performance analysis results and recognition scores on a large video dataset.

  10. Study on local Gabor binary patterns for face representation and recognition

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Local Binary Patterns (LBP) have recently received much attention in face representation and recognition. The original LBP operator describes local spatial structure information, essentially the edge and corner features of local facial regions, which are important cues for distinguishing different faces. However, the scale and orientation of these edge features carry additional detail that could be used to distinguish individuals more effectively, and the original LBP operator cannot extract this information. In this paper, building on LBP-based facial representation and recognition, histogram sequences of local Gabor binary patterns are used to represent the facial image. The Principal Component Analysis (PCA) method is used to classify the histogram sequences, which are first converted to vectors. Recognition experiments show that the method used in this paper improves classification performance by nearly 6% over the original LBP operator.

  11. Speechreading and the Bruce-Young model of face recognition: early findings and recent developments.

    PubMed

    Campbell, Ruth

    2011-11-01

    In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a very clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, while face-recognition impairment consequent to right-hemisphere damage leaves speechreading unaffected. However, this story soon proved too simple, while neuroimaging techniques started to reveal further more detailed patterns. These patterns, moreover, were readily accommodated within the Bruce and Young (1986) model. Speechreading requires structural encoding of faces as faces, but further analysis of visible speech is supported by a network comprising several lateral temporal regions and inferior frontal regions. Posterior superior temporal regions play a significant role in speechreading natural speech, including audiovisual binding in hearing people. In deaf people, similar regions and circuits are implicated. While these detailed developments were not predicted by Bruce and Young, nevertheless, their model has stood the test of time, affording a structural framework for exploring speechreading in terms of face processing.

  12. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.

  13. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition

    PubMed Central

    Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene

    2014-01-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662

  14. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
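
    One way to realize a Hausdorff fraction between rasterized feature point sets is sketched below in Python: precompute a Euclidean distance transform of the model points so that each probe point's distance to its nearest model point is a single array lookup. The grid size and distance threshold are illustrative, and this 2-D sketch does not reproduce the paper's range-image formulation.

    ```python
    # Hausdorff fraction via a distance transform over a rasterized point grid.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def hausdorff_fraction(probe_pts, model_pts, grid=(256, 256), threshold=3.0):
        """Fraction of probe points within `threshold` pixels of some model point."""
        occupancy = np.ones(grid, dtype=bool)
        rows = np.clip(model_pts[:, 0].astype(int), 0, grid[0] - 1)
        cols = np.clip(model_pts[:, 1].astype(int), 0, grid[1] - 1)
        occupancy[rows, cols] = False                 # False marks model points
        dist_to_model = distance_transform_edt(occupancy)
        pr = np.clip(probe_pts[:, 0].astype(int), 0, grid[0] - 1)
        pc = np.clip(probe_pts[:, 1].astype(int), 0, grid[1] - 1)
        return float((dist_to_model[pr, pc] <= threshold).mean())
    ```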

  15. Background learning for robust face recognition with PCA in the presence of clutter.

    PubMed

    Rajagopalan, A N; Chellappa, Rama; Koterba, Nathan T

    2005-06-01

    We propose a new method within the framework of principal component analysis (PCA) to robustly recognize faces in the presence of clutter. The traditional eigenface recognition (EFR) method, which is based on PCA, works quite well when the input test patterns are faces. However, when confronted with the more general task of recognizing faces appearing against a background, the performance of the EFR method can be quite poor. It may miss faces completely or may wrongly associate many of the background image patterns with faces in the training set. In order to improve performance in the presence of background, we argue in favor of learning the distribution of background patterns and show how this can be done for a given test image. An eigenbackground space is constructed corresponding to the given test image and this space in conjunction with the eigenface space is used to impart robustness. A suitable classifier is derived to distinguish nonface patterns from faces. When tested on images depicting face recognition in real situations against cluttered background, the performance of the proposed method is quite good with fewer false alarms.
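
    The core eigenface-versus-eigenbackground comparison can be sketched as follows (a simplified illustration using scikit-learn PCA, not the authors' exact classifier): a candidate patch is accepted as a face only if the face subspace reconstructs it with a smaller residual than a background subspace built from patches of the same test image. The number of components and the patch handling are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      def fit_space(patches, n_components=20):
          """Fit a PCA subspace to row-vectorized patches (needs >= n_components samples)."""
          return PCA(n_components=n_components).fit(patches)

      def residual(space, x):
          """Distance between x and its reconstruction in the PCA subspace."""
          x = x.reshape(1, -1)
          return np.linalg.norm(x - space.inverse_transform(space.transform(x)))

      def looks_like_face(patch, face_space, background_space):
          # accept the patch as a face only if the eigenface space explains it
          # better than the eigenbackground space built from the test image
          return residual(face_space, patch) < residual(background_space, patch)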

  16. Color correction using color-flow eigenspace model in color face recognition

    NASA Astrophysics Data System (ADS)

    Choi, JaeYoung; Ro, Yong Man

    2009-02-01

    We propose a new color correction approach which, unlike existing methods, takes advantage of a given pair of color face images (probe and gallery) in the color face recognition (FR) framework. In the proposed color correction method, the color-flow vector and color-flow eigenspace model are developed to generate color-corrected probe images. The main contribution of this paper is threefold: 1) the proposed method can reliably compensate for the non-linear photic variations imposed on probe face images, compared to traditional color correction techniques; 2) to the best of our knowledge, for the first time, we conduct extensive experimental studies to compare the effectiveness of various color correction methods in dealing with photometric distortions in probe images; 3) the proposed method can significantly enhance the recognition performance degraded by severe illumination variations in probe face images. Two standard face databases, CMU PIE and XM2VTSDB, were used to demonstrate the effectiveness of the proposed color correction method. The usefulness of the proposed method in color FR is shown in terms of both absolute and comparative recognition performance against four traditional color correction solutions: white balance, gray-world, Retinex, and color-by-correlation.
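
    For reference, one of the traditional baselines the paper compares against, gray-world correction, can be sketched in a few lines (this is the baseline, not the proposed color-flow eigenspace method; the [0, 1] RGB input format is an assumption):

      import numpy as np

      def gray_world(img):
          """Gray-world correction of an RGB image with values in [0, 1]."""
          means = img.reshape(-1, 3).mean(axis=0)            # per-channel means
          gain = means.mean() / np.maximum(means, 1e-8)      # pull each channel toward the global mean
          return np.clip(img * gain, 0.0, 1.0)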

  17. Effects of sleep loss on emotion recognition: a dissociation between face and word stimuli.

    PubMed

    Maccari, Lisa; Martella, Diana; Marotta, Andrea; Sebastiani, Mara; Banaj, Nerisa; Fuentes, Luis J; Casagrande, Maria

    2014-10-01

    Short-term sleep deprivation, or extended wakefulness, adversely affects cognitive functions and behavior. However, scarce research has addressed the effects of sleep deprivation (SD) on emotional processing. In this study, we investigated the impact of reduced vigilance due to moderate sleep deprivation on the ability to recognize emotional expressions of faces and the emotional content of words. Participants remained awake for 24 h and performed the tasks in two sessions, one in which they were not affected by sleep loss (baseline; BSL) and the other in which they were affected by SD, according to a counterbalanced sequence. Tasks were carried out twice at 10:00 and 4:00 am, or at 12:00 and 6:00 am. In both tasks, participants had to respond to the emotional valence of the target stimulus: negative, positive, or neutral. The results showed that in the word task, sleep deprivation impaired recognition irrespective of the emotional valence of the words. In the face task, however, sleep deprivation impaired recognition mainly when faces showed a neutral expression. Emotional face expressions were less affected by sleep loss, and positive faces were more resistant than negative faces to the detrimental effect of sleep deprivation. The differential effects of sleep deprivation on recognition of the different emotional stimuli indicate that emotional facial expressions are stronger emotional stimuli than emotionally laden words. This dissociation may be attributed to the more automatic sensory encoding of emotional facial content.

  18. WHAT PREDICTS THE OWN-AGE BIAS IN FACE RECOGNITION MEMORY?

    PubMed Central

    He, Yi; Ebner, Natalie C.; Johnson, Marcia K.

    2011-01-01

    Younger and older adults’ visual scan patterns were examined as they passively viewed younger and older neutral faces. Both participant age groups tended to look longer at their own-age as compared to other-age faces. In addition, both age groups reported more exposure to own-age than other-age individuals. Importantly, the own-age bias in visual inspection of faces and the own-age bias in self-reported amount of exposure to young and older individuals in everyday life, but not explicit age stereotypes and implicit age associations, significantly and independently predicted the own-age bias in later old/new face recognition. We suggest these findings reflect increased personal and social relevance of, and more accessible and elaborated schemas for, own-age than other-age faces. PMID:21415928

  19. A Cognitively-Motivated Framework for Partial Face Recognition in Unconstrained Scenarios

    PubMed Central

    Monteiro, João C.; Cardoso, Jaime S.

    2015-01-01

    Humans perform and rely on face recognition routinely and effortlessly throughout their daily lives. Multiple works in recent years have sought to replicate this process in a robust and automatic way. However, it is known that the performance of face recognition algorithms is severely compromised in non-ideal image acquisition scenarios. In an attempt to deal with conditions such as occlusion and heterogeneous illumination, we propose a new approach motivated by the global precedence hypothesis of the human brain's cognitive mechanisms of perception. An automatic modeling of SIFT keypoint descriptors using a Gaussian mixture model (GMM)-based universal background model method is proposed. A decision is then made in an innovative hierarchical sense, with holistic information gaining precedence over a more detailed local analysis. The algorithm was tested on the ORL, AR, and Extended Yale B face databases and presented state-of-the-art performance for a variety of experimental setups. PMID:25602266
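
    The universal-background-model step can be sketched roughly as follows, assuming an OpenCV build that exposes SIFT and using scikit-learn's GaussianMixture; the number of mixture components and the simple likelihood score are illustrative, and the paper's hierarchical, holistic-then-local decision stage is omitted.

      import cv2
      import numpy as np
      from sklearn.mixture import GaussianMixture

      sift = cv2.SIFT_create()   # requires an OpenCV build that includes SIFT

      def descriptors(gray_img):
          _, desc = sift.detectAndCompute(gray_img, None)
          return desc if desc is not None else np.empty((0, 128), np.float32)

      def train_ubm(training_images, n_components=64):
          """Pool SIFT descriptors from many faces into one GMM (the universal background model)."""
          pool = np.vstack([descriptors(im) for im in training_images])
          return GaussianMixture(n_components=n_components, covariance_type='diag').fit(pool)

      def ubm_score(ubm, probe_image):
          """Mean log-likelihood of the probe's descriptors under the UBM."""
          desc = descriptors(probe_image)
          return ubm.score(desc) if len(desc) else -np.inf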

  20. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.

  1. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism.

    PubMed

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-03-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity.

  2. Face Memory and Object Recognition in Children with High-Functioning Autism or Asperger Syndrome and in Their Parents

    ERIC Educational Resources Information Center

    Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma

    2011-01-01

    Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these fields at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…

  3. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    ERIC Educational Resources Information Center

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  4. ERP Correlates of Target-Distracter Differentiation in Repeated Runs of a Continuous Recognition Task with Emotional and Neutral Faces

    ERIC Educational Resources Information Center

    Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus

    2010-01-01

    The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…

  5. Face recognition deficits in autism spectrum disorders are both domain specific and process specific.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2013-01-01

    Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit "process specific" for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit "domain specific" for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further showed deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception.

  6. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and it has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results on two public databases, the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database, show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.

  7. Face Recognition with Pose Variations and Misalignment via Orthogonal Procrustes Regression.

    PubMed

    Tai, Ying; Yang, Jian; Zhang, Yigong; Luo, Lei; Qian, Jianjun; Chen, Yu

    2016-04-06

    Linear regression-based methods are a hot topic in the face recognition community. Recently, sparse representation and collaborative representation based classifiers for face recognition have been proposed and have attracted great attention. However, most of the existing regression analysis based methods are sensitive to pose variations. In this paper, we introduce the orthogonal Procrustes problem (OPP) as a model to handle pose variations in two-dimensional face images. OPP seeks an optimal linear transformation between two images with different poses so that the transformed image best fits the other one. We integrate OPP into the regression model and propose the orthogonal Procrustes regression (OPR) model. To address the problem that a linear transformation is not suitable for handling highly non-linear pose variations, we further adopt a progressive strategy and propose the stacked orthogonal Procrustes regression (stacked OPR). As a practical framework, OPR can handle face alignment, pose correction and face representation simultaneously. We optimize the proposed model via an efficient alternating iterative algorithm, and experimental results on three popular face databases, the CMU PIE, CMU Multi-PIE and LFW databases, demonstrate the effectiveness of our proposed method.
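
    The orthogonal Procrustes step at the heart of OPR can be illustrated with SciPy (a minimal sketch; the random matrices merely stand in for pre-processed face images, and the alternating regression solve of the full OPR model is not shown):

      import numpy as np
      from scipy.linalg import orthogonal_procrustes

      A = np.random.rand(64, 64)          # stand-in for a posed probe face matrix
      B = np.random.rand(64, 64)          # stand-in for a gallery face matrix

      Q, _ = orthogonal_procrustes(A, B)  # orthogonal Q minimizing ||A Q - B||_F
      aligned = A @ Q                     # A transformed toward B
      # the optimal Q can never do worse than leaving A unchanged
      assert np.linalg.norm(aligned - B) <= np.linalg.norm(A - B)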

  8. Face Recognition With Pose Variations and Misalignment via Orthogonal Procrustes Regression.

    PubMed

    Tai, Ying; Yang, Jian; Zhang, Yigong; Luo, Lei; Qian, Jianjun; Chen, Yu

    2016-06-01

    Linear regression-based methods are a hot topic in the face recognition community. Recently, sparse representation and collaborative representation-based classifiers for face recognition have been proposed and have attracted great attention. However, most of the existing regression analysis-based methods are sensitive to pose variations. In this paper, we introduce the orthogonal Procrustes problem (OPP) as a model to handle pose variations in 2D face images. OPP seeks an optimal linear transformation between two images with different poses so that the transformed image best fits the other one. We integrate OPP into the regression model and propose the orthogonal Procrustes regression (OPR) model. To address the problem that a linear transformation is not suitable for handling highly non-linear pose variations, we further adopt a progressive strategy and propose the stacked OPR. As a practical framework, OPR can handle face alignment, pose correction, and face representation simultaneously. We optimize the proposed model via an efficient alternating iterative algorithm, and experimental results on three popular face databases, the CMU PIE, CMU Multi-PIE, and LFW databases, demonstrate the effectiveness of our proposed method.

  9. Neural correlates of temporal integration in face recognition: an fMRI study.

    PubMed

    Lee, Yunjo; Anaki, David; Grady, Cheryl L; Moscovitch, Morris

    2012-07-16

    Integration of temporally separated visual inputs is crucial for perception of a unified representation. Here, we show that regions involved in configural processing of faces contribute to temporal integration occurring within a limited time-window using a multivariate analysis (partial least squares, PLS) exploring the relation between brain activity and recognition performance. During fMRI, top and bottom parts of a famous face were presented sequentially with a varying interval (0, 200, or 800 ms) or were misaligned. The 800 ms condition activated several regions implicated in face processing, attention and working memory, relative to the other conditions, suggesting more active maintenance of individual face parts. Analysis of brain-behavior correlations showed that better identification in the 0 and 200 conditions was associated with increased activity in areas considered to be part of a configural face processing network, including right fusiform, middle occipital, bilateral superior temporal areas, anterior/middle cingulate and frontal cortices. In contrast, successful recognition in the 800 and misaligned conditions, which involve analytic and strategic processing, was negatively associated with activation in these regions. Thus, configural processing may involve rapid temporal integration of facial features and their relations. Our finding that regions concerned with configural and analytic processes in the service of face identification opposed each other may explain why it is difficult to apply the two processes concurrently.

  10. Development of coffee maker service robot using speech and face recognition systems using POMDP

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Meiliana; Santoso Gunawan, Alexander Agung

    2016-07-01

    There have been many developments in intelligent service robots intended to interact with users naturally. This can be done by embedding speech and face recognition abilities for specific tasks into the robot. In this research, we propose an Intelligent Coffee Maker Robot whose speech recognition is based on the Indonesian language and powered by statistical dialogue systems. This kind of robot can be used in the office, supermarket or restaurant. In our scenario, the robot recognizes the user's face and then accepts commands from the user to perform an action, specifically making a coffee. Based on our previous work, the accuracy of speech recognition is about 86% and of face recognition about 93% in laboratory experiments. The main problem here is determining the user's intention about how sweet the coffee should be. The intelligent coffee maker robot should infer the user's intention through conversation under unreliable automatic speech recognition in a noisy environment. In this paper, this spoken dialog problem is treated as a partially observable Markov decision process (POMDP). We describe how this formulation establishes a promising framework, supported by empirical results. Dialog simulations are presented which demonstrate significant quantitative outcomes.
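
    A toy sketch of the POMDP-style belief update over the user's sweetness intention is given below; the three sweetness states, the observation probabilities and the 0.8 confidence threshold are invented for illustration and are not values from the paper.

      import numpy as np

      states = ['no_sugar', 'less_sweet', 'sweet']
      belief = np.ones(len(states)) / len(states)        # uniform prior over intentions

      # P(recognized word | true intention); rows = true state, cols = recognized word
      obs_model = np.array([[0.7, 0.2, 0.1],
                            [0.2, 0.6, 0.2],
                            [0.1, 0.2, 0.7]])

      def update_belief(belief, observed_idx):
          """Bayes update of the belief after one (possibly misrecognized) utterance."""
          new_belief = belief * obs_model[:, observed_idx]
          return new_belief / new_belief.sum()

      belief = update_belief(belief, observed_idx=2)     # the recognizer heard "sweet"
      if belief.max() < 0.8:                             # low confidence: ask a clarifying question
          print('Robot: how sweet would you like your coffee?')
      else:
          print('Robot: making it', states[int(belief.argmax())])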

  11. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  12. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression

    PubMed Central

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734
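
    The nuclear-norm scoring idea can be sketched as follows (a simplified illustration: a plain least-squares fit replaces the paper's nuclear-norm-regularized regression, and the learned multi-scale fusion weights are not shown). Each class is scored by the nuclear norm of the 2D residual between the test patch and its reconstruction from that class's gallery patches.

      import numpy as np

      def nuclear_norm(M):
          return np.linalg.svd(M, compute_uv=False).sum()

      def class_score(test_patch, gallery_patches):
          """Score one class: nuclear norm of the 2D residual after reconstructing
          the test patch from that class's gallery patches (smaller is better)."""
          h, w = test_patch.shape
          A = np.stack([g.ravel() for g in gallery_patches], axis=1)   # (h*w, n_gallery)
          coeff, *_ = np.linalg.lstsq(A, test_patch.ravel(), rcond=None)
          residual = test_patch - (A @ coeff).reshape(h, w)
          return nuclear_norm(residual)

      # multi-scale use: compute class_score per patch at several patch sizes
      # and fuse the per-scale scores (the paper learns how to weight them)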

  13. Eye-movement strategies in developmental prosopagnosia and "super" face recognition.

    PubMed

    Bobak, Anna K; Parris, Benjamin A; Gregory, Nicola J; Bennetts, Rachel J; Bate, Sarah

    2017-02-01

    Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose-a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.

  14. Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition.

    PubMed

    Passalis, Georgios; Perakis, Panagiotis; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2011-10-01

    The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.

  15. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  16. Age-related differences in brain electrical activity during extended continuous face recognition in younger children, older children and adults.

    PubMed

    Van Strien, Jan W; Glimmerveen, Johanna C; Franken, Ingmar H A; Martens, Vanessa E G; de Bruin, Eveline A

    2011-09-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with distracter faces. The children were required to make old vs. new decisions. Older children responded faster than younger children, but younger children exhibited a steeper decrease in latencies across the five repetitions. Older children exhibited better accuracy for new faces, but there were no age differences in recognition accuracy for repeated faces. For the N2, N400 and late positive complex (LPC), we analyzed the old/new effects (repetition 1 vs. new presentation) and the extended repetition effects (repetitions 1 through 5). Compared to older children, younger children exhibited larger frontocentral N2 and N400 old/new effects. For extended face repetitions, negativity of the N2 and N400 decreased in a linear fashion in both age groups. For the LPC, an ERP component thought to reflect recollection, no significant old/new or extended repetition effects were found. Employing the same face recognition paradigm in 20 adults (Study 2), we found a significant N400 old/new effect at lateral frontal sites and a significant LPC repetition effect at parietal sites, with LPC amplitudes increasing linearly with the number of repetitions. This study clearly demonstrates differential developmental courses for the N400 and LPC pertaining to recognition memory for faces. It is concluded that face recognition in children is mediated by early and probably more automatic than conscious recognition processes. In adults, the LPC extended repetition effect indicates that adult face recognition memory is related to a conscious and graded recollection process rather than to an automatic recognition process.

  17. Experiment on parallel correlated recognition of 2030 human faces based on speckle modulation.

    PubMed

    Liao, Yi; Guo, Yunbo; Cao, Liangcai; Ma, Xiaosu; He, Qingsheng; Jin, Guofan

    2004-08-23

    In this paper, the experiment on parallel correlated recognition of 2030 human faces in a Fe:LiNbO3 crystal is presented in detail; a very clear array of correlation spots was achieved and the recognition accuracy is better than 95%. The experiment demonstrates that speckle modulation of the object beam of volume holographic correlators can effectively suppress crosstalk, so that the multiplexing spacing is markedly reduced and the channel density is increased 10-fold compared with traditional holographic correlators without speckle modulation.

  18. Exploring Hindu Indian emotion expressions: evidence for accurate recognition by Americans and Indians.

    PubMed

    Hejmadi, A; Davidson, R J; Rozin, P

    2000-05-01

    Subjects were presented with videotaped expressions of 10 classic Hindu emotions. The 10 emotions were (in rough translation from Sanskrit) anger, disgust, fear, heroism, humor-amusement, love, peace, sadness, shame-embarrassment, and wonder. These emotions (except for shame) and their portrayal were described about 2,000 years ago in the Natyasastra, and are enacted in the contemporary Hindu classical dance. The expressions are dynamic and include both the face and the body, especially the hands. Three different expressive versions of each emotion were presented, along with 15 neutral expressions. American and Indian college students responded to each of these 45 expressions using either a fixed-response format (10 emotion names and "neutral/no emotion") or a totally free response format. Participants from both countries were quite accurate in identifying emotions correctly using both fixed-choice (65% correct, expected value of 9%) and free-response (61% correct, expected value close to zero) methods.

  19. Right perceptual bias and self-face recognition in individuals with congenital prosopagnosia.

    PubMed

    Malaspina, Manuela; Albonico, Andrea; Daini, Roberta

    2016-01-01

    A tendency to base judgments more on the right half of facial stimuli, which falls in the observer's left visual field (the left perceptual bias, LPB), has been demonstrated in normal individuals. Less is known, however, about the existence of this phenomenon in people with a face recognition impairment from birth, namely congenital prosopagnosics. In the current study, we aimed to investigate the presence of the LPB under conditions of face impairment using chimeric stimuli and the most familiar face of all: the self-face. For this purpose we tested 10 participants with congenital prosopagnosia and 21 healthy controls with a face matching task using facial stimuli involving a spatial manipulation of the left and right hemi-faces of self-photos and photos of others. Even though congenital prosopagnosics' performance was significantly lower than that of controls, both groups showed a consistent self-face advantage. Moreover, congenital prosopagnosics showed their best performance when the right side of their own face was presented, that is, a right perceptual bias, suggesting a different strategy for self-recognition in these subjects. A possible explanation for this result is discussed.

  20. [Non-conscious perception of emotional faces affects visual object recognition].

    PubMed

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Mikhaĭlova, E S

    2013-01-01

    In 34 healthy subjects we analyzed accuracy and reaction time (RT) during the recognition of complex visual images: pictures of animals and non-living objects. The target stimuli were preceded by brief presentations of masking non-target stimuli, which were drawings of emotional (angry, fearful, happy) or neutral faces. We found that, in contrast to accuracy, RT depended on the emotional expression of the preceding faces. RT was significantly shorter when the target objects were paired with angry and fearful faces compared with happy and neutral ones. These effects depended on the category of the target stimulus and were more prominent for objects than for animals. Furthermore, the effects of the emotional faces were modulated by emotional and communication personality traits (assessed with Cattell's questionnaire) and were more clearly expressed in more sensitive, anxious and pessimistic introverts. The data are important for understanding the mechanisms by which non-conscious processing of emotional information shapes human visual behavior.

  1. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

    Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database consisting of audio and/or visual data of a number of suspects, wanted or missing persons who are expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.

  2. A family at risk: congenital prosopagnosia, poor face recognition and visuoperceptual deficits within one family.

    PubMed

    Johnen, Andreas; Schmukle, Stefan C; Hüttenbrink, Judith; Kischka, Claudia; Kennerknecht, Ingo; Dobel, Christian

    2014-05-01

    Congenital prosopagnosia (CP) describes a severe face processing impairment despite intact early vision and in the absence of overt brain damage. CP is assumed to be present from birth and often transmitted within families. Previous studies reported conflicting findings regarding associated deficits in nonface visuoperceptual tasks. However, diagnostic criteria for CP significantly differed between studies, impeding conclusions on the heterogeneity of the impairment. Following current suggestions for clinical diagnoses of CP, we administered standardized tests for face processing, a self-report questionnaire and general visual processing tests to an extended family (N=28), in which many members reported difficulties with face recognition. This allowed us to assess the degree of heterogeneity of the deficit within a large sample of suspected CPs of similar genetic and environmental background. (a) We found evidence for a severe face processing deficit but intact nonface visuoperceptual skills in three family members - a father and his two sons - who fulfilled conservative criteria for a CP diagnosis on standardized tests and a self-report questionnaire, thus corroborating findings of familial transmissions of CP. (b) Face processing performance of the remaining family members was also significantly below the mean of the general population, suggesting that face processing impairments are transmitted as a continuous trait rather than in a dichotomous all-or-nothing fashion. (c) Self-rating scores of face recognition showed acceptable correlations with standardized tests, suggesting this method as a viable screening procedure for CP diagnoses. (d) Finally, some family members revealed severe impairments in general visual processing and nonface visual memory tasks either in conjunction with face perception deficits or as an isolated impairment. This finding may indicate an elevated risk for more general visuoperceptual deficits in families with prosopagnosic members.

  3. The neural basis of self-face recognition after self-concept threat and comparison with important others.

    PubMed

    Guan, Lili; Qi, Mingming; Zhang, Qinglin; Yang, Juan

    2014-01-01

    The implicit positive association (IPA) theory attributes the self-face advantage to the IPA with the self-concept. A previous behavioral study found that self-concept threat (SCT) could eliminate the self-face advantage over a familiar face, without taking levels of facial familiarity into account. The current event-related potential study aimed to investigate whether SCT could eliminate the self-face advantage over a stranger's face. Fifteen participants completed a "self-friend" comparison task, in which they identified the face orientation of self-face and friend-face after SCT and non-self-concept threat (NSCT) priming, and a "self-stranger" comparison task, in which they identified the face orientation of self-face and stranger-face after SCT and NSCT priming. The results showed that the N2 amplitudes were more negative for processing friend-face than self-face after NSCT priming, but there was no significant difference between them after SCT priming. Moreover, the N2 amplitudes were more negative for processing stranger-face than self-face both after SCT priming and after NSCT priming. Furthermore, SCT modulated the N2 amplitudes for friend-face rather than self-face. Overall, the present study supplements the current IPA theory and further indicates that SCT eliminates the self-face recognition advantage only in comparison with important others.

  4. Automated, long-range, night/day, active-SWIR face recognition system

    NASA Astrophysics Data System (ADS)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; Dolby, Andrew; Ice, Robert

    2014-06-01

    Covert, long-range, night/day identification of stationary human subjects using face recognition has been previously demonstrated using the active-SWIR Tactical Imager for Night/Day Extended-Range Surveillance (TINDERS) system. TINDERS uses an invisible, eye-safe, SWIR laser illuminator to produce high-quality facial imagery under conditions ranging from bright sunlight to total darkness. The recent addition of automation software to TINDERS has enabled the autonomous identification of moving subjects at distances greater than 100 m. Unlike typical cooperative, short range face recognition scenarios, where positive identification requires only a single face image, the SWIR wavelength, long distance, and uncontrolled conditions mean that positive identification requires fusing the face matching results from multiple captured images of a single subject. Automation software is required to initially detect a person, lock on and track the person as they move, and select video frames containing high-quality frontal face images for processing. Fusion algorithms are required to combine the matching results from multiple frames to produce a high-confidence match. These automation functions will be described, and results showing automated identification of moving subjects, night and day, at multiple distances will be presented.

  5. Face recognition across non-uniform motion blur, illumination, and pose.

    PubMed

    Punnappurath, Abhijith; Rajagopalan, Ambasamudram Narayanan; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna

    2015-07-01

    Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by making use of the assumption of a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.

  6. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is then applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
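
    The fuzzy k-nearest-neighbor membership computation that feeds the scatter matrices can be sketched as below; crisp neighbor labels, k = 5 and the fuzzifier m = 2 are assumptions, and the subsequent membership-weighted scatter construction is not shown.

      import numpy as np

      def fknn_memberships(X_train, y_train, x, k=5, m=2.0):
          """Membership degrees of sample x to each class, from its k nearest neighbors."""
          y_train = np.asarray(y_train)
          classes = np.unique(y_train)
          d = np.linalg.norm(X_train - x, axis=1)
          nn = np.argsort(d)[:k]                                      # k nearest training samples
          w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))     # fuzzy inverse-distance weights
          u = np.array([w[y_train[nn] == c].sum() for c in classes])
          return u / u.sum()                                          # memberships sum to 1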

  7. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm that is based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, a feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data; Gabor and Gauss-Laguerre filter banks applied to these representations yield 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on the 13 comparison scores using a Support Vector Machine (SVM) classifier.
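
    The final fusion stage can be illustrated with scikit-learn as follows; the random score vectors merely stand in for the 13 per-comparison scores (12 filter-bank representations plus the iso-geodesic curves), and the RBF kernel and training-set sizes are assumptions.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      genuine = rng.normal(0.8, 0.1, size=(200, 13))     # 13 scores per genuine comparison
      impostor = rng.normal(0.4, 0.1, size=(200, 13))    # 13 scores per impostor comparison
      X = np.vstack([genuine, impostor])
      y = np.hstack([np.ones(200), np.zeros(200)])

      fusion = SVC(kernel='rbf', probability=True).fit(X, y)   # score-level fusion classifier
      probe_scores = rng.normal(0.75, 0.1, size=(1, 13))
      print('fused genuine probability:', fusion.predict_proba(probe_scores)[0, 1])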

  8. Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach.

    PubMed

    Kakadiaris, Ioannis A; Passalis, Georgios; Toderici, George; Murtuza, Mohammed N; Lu, Yunliang; Karampatziakis, Nikos; Theoharis, Theoharis

    2007-04-01

    In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.

  9. Pose-Encoded Spherical Harmonics for Face Recognition and Synthesis Using a Single Image

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Zhao, Wenyi; Chellappa, Rama

    2007-12-01

    Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition. That is, to identify a subject from a test image that is acquired under a different pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source. Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario. Thus, we directly project a nonfrontal view test image onto the space of frontal view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. Very good recognition results are obtained using this method for both synthetic and challenging real images.

  10. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    ERIC Educational Resources Information Center

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  11. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.

  12. Local directional pattern of phase congruency features for illumination invariant face recognition

    NASA Astrophysics Data System (ADS)

    Essa, Almabrok E.; Asari, Vijayan K.

    2014-04-01

    An illumination-robust face recognition system using Local Directional Pattern (LDP) descriptors in Phase Congruency (PC) space is proposed in this paper. The proposed Directional Pattern of Phase Congruency (DPPC) is an oriented, multi-scale local descriptor that is able to encode various patterns of face images under different lighting conditions. It is constructed by applying LDP to the oriented PC images. An LDP feature is obtained by computing the edge response values in eight directions at each pixel position and encoding them into an eight-bit binary code using the relative strength of these edge responses. Phase congruency and the local directional pattern have been used independently in the field of face and facial expression recognition, since they are robust to illumination changes. Whereas PC extracts discontinuities in the image such as edges and corners, LDP computes the edge response values in different directions and uses these to encode the image texture. The local directional pattern descriptor on the phase congruency image is subjected to principal component analysis (PCA) for dimensionality reduction, for fast and effective face recognition. The performance of the proposed DPPC algorithm is evaluated on several publicly available databases, and promising recognition rates are observed. The better classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors such as the Local Binary Pattern (LBP). In other words, our results show that with the LDP descriptor the Euclidean distance between a reference image and testing images of the same class is much smaller than that between the reference image and testing images from other classes.
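
    A compact sketch of the LDP encoding itself is shown below (Kirsch compass masks, with a bit set for each of the k = 3 strongest responses per pixel); in DPPC this code is computed on the oriented phase-congruency images rather than the raw image, and the exact mask ordering and bit assignment here are illustrative.

      import numpy as np
      from scipy.ndimage import convolve

      KIRSCH_EAST = np.array([[-3, -3, 5],
                              [-3,  0, 5],
                              [-3, -3, 5]], dtype=np.float32)

      def kirsch_masks():
          """The 8 compass masks, generated by rotating the border of the east mask."""
          masks, m = [], KIRSCH_EAST
          for _ in range(8):
              masks.append(m)
              flat = [m[0, 0], m[0, 1], m[0, 2], m[1, 2], m[2, 2], m[2, 1], m[2, 0], m[1, 0]]
              flat = flat[-1:] + flat[:-1]               # rotate border values one step
              m = np.array([[flat[0], flat[1], flat[2]],
                            [flat[7],       0, flat[3]],
                            [flat[6], flat[5], flat[4]]], dtype=np.float32)
          return masks

      def ldp_code(img, k=3):
          """8-bit LDP code: set a bit for each of the k strongest directional responses."""
          responses = np.stack([np.abs(convolve(img.astype(np.float32), m)) for m in kirsch_masks()])
          strongest = np.argsort(responses, axis=0)[-k:]   # direction indices of the k largest
          code = np.zeros(img.shape, dtype=np.uint8)
          for bit in strongest:
              code |= (1 << bit).astype(np.uint8)
          return code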

  13. On the Relation between Face and Object Recognition in Developmental Prosopagnosia: No Dissociation but a Systematic Association

    PubMed Central

    Klargaard, Solja K.; Starrfelt, Randi

    2016-01-01

    There is an ongoing debate about whether face recognition and object recognition constitute separate domains. Clarification of this issue can have important theoretical implications as face recognition is often used as a prime example of domain-specificity in mind and brain. An important source of input to this debate comes from studies of individuals with developmental prosopagnosia, suggesting that face recognition can be selectively impaired. We put the selectivity hypothesis to test by assessing the performance of 10 individuals with developmental prosopagnosia on demanding tests of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a clear dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive correlations between the severity of the face recognition impairment and the degree of impaired performance with degraded objects. This suggests that the face and object deficits are systematically related rather than coincidental. We conclude that at present, there is no strong evidence in the literature on developmental prosopagnosia supporting domain-specific accounts of face recognition. PMID:27792780

  14. Band-Reweighed Gabor Kernel Embedding for Face Image Representation and Recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Li, Xiao-Xin; Lai, Zhao-Rong

    2014-02-01

    Face recognition with illumination or pose variation is a challenging problem in image processing and pattern recognition. A novel algorithm using band-reweighed Gabor kernel embedding to deal with the problem is proposed in this paper. A given image is first transformed by a group of Gabor filters, which output Gabor features for different orientation and scale parameters. A Fisher scoring function is used to measure the importance of the features in each band, and the features with the largest scores are preserved to reduce memory requirements. The reduced bands are combined using a weight vector, which is determined by a weighted kernel discriminant criterion and solved by a constrained quadratic programming method; the weighted sum of these nonlinear bands is then defined as the similarity between two images. Compared with existing concatenation-based Gabor feature representations and uniformly weighted similarity calculation approaches, our method provides a new way to use Gabor features for face recognition and presents a reasonable interpretation for highlighting discriminant orientations and scales. The minimum Mahalanobis distance, which takes into account the spatial correlations within the data, is exploited for feature matching, and the graphical lasso is used therein to directly estimate the sparse inverse covariance matrix. Experiments using benchmark databases show that our new algorithm improves the recognition results and obtains competitive performance.

  15. Computational and performance aspects of PCA-based face-recognition algorithms.

    PubMed

    Moon, H; Phillips, P J

    2001-01-01

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA-algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) that changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of +/- 10% is needed to distinguish between algorithms.

  16. Fusing local patterns of Gabor magnitude and phase for face recognition.

    PubMed

    Xie, Shufu; Shan, Shiguang; Chen, Xilin; Chen, Jie

    2010-05-01

    Gabor features have been known to be effective for face recognition. However, only a few approaches utilize the phase feature, and they usually perform worse than those using the magnitude feature. To investigate the potential of Gabor phase and its fusion with magnitude for face recognition, in this paper we first propose local Gabor XOR patterns (LGXP), which encode the Gabor phase by using the local XOR pattern (LXP) operator. Then, we introduce block-based Fisher's linear discriminant (BFLD) to reduce the dimensionality of the proposed descriptor and at the same time enhance its discriminative power. Finally, by using BFLD, we fuse local patterns of Gabor magnitude and phase for face recognition. We evaluate our approach on the FERET and FRGC 2.0 databases. In particular, we perform comparative experimental studies of different local Gabor patterns. We also make a detailed comparison of their combinations with BFLD, as well as the fusion of different descriptors by using BFLD. Extensive experimental results verify the effectiveness of our LGXP descriptor and also show that our fusion approach outperforms most of the state-of-the-art approaches.
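
    The LXP encoding of Gabor phase can be sketched as follows (an illustration under stated assumptions: the four-level phase quantization, the 8-neighbour layout, and the 4 x 4 block grid are common choices rather than the paper's exact settings, and the BFLD stage is omitted):

        import numpy as np

        def quantize_phase(phase, levels=4):
            # Map Gabor phase in (-pi, pi] to integer levels 0..levels-1.
            return ((phase + np.pi) / (2 * np.pi) * levels).astype(int) % levels

        def lgxp_codes(quantized):
            # 8-bit local XOR pattern: bit i is set where the i-th neighbour's level
            # differs from the centre pixel's level.
            h, w = quantized.shape
            center = quantized[1:h - 1, 1:w - 1]
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros_like(center)
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = quantized[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                codes |= (neighbour != center).astype(int) << bit
            return codes

        def block_histograms(codes, grid=(4, 4), n_bins=256):
            # Concatenate per-block histograms of the codes into one descriptor.
            h, w = codes.shape
            feats = []
            for by in range(grid[0]):
                for bx in range(grid[1]):
                    block = codes[by * h // grid[0]:(by + 1) * h // grid[0],
                                  bx * w // grid[1]:(bx + 1) * w // grid[1]]
                    feats.append(np.bincount(block.ravel(), minlength=n_bins))
            return np.concatenate(feats).astype(float)

        # phase = np.angle(complex Gabor response), e.g. from scipy.signal.fftconvolve
        # with a skimage.filters.gabor_kernel.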

  17. The utility of multiple synthesized views in the recognition of unfamiliar faces.

    PubMed

    Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B

    2017-05-01

    The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.

  18. Electrophysiological study of contextual variations in a short-term face recognition task.

    PubMed

    Guillaume, Fabrice; Tiberghien, Guy

    2005-03-01

    Event-related potentials (ERPs) were recorded during two short-term recognition tasks using unfamiliar faces. The experiments were based on the process dissociation procedure (PDP), with the exclusion criterion being either an intrinsic context, the facial expression (Experiment 1), or an extrinsic context, the background (Experiment 2). The results indicate that retrieval orientation, in addition to extensive strategic control, affects both the frontal (N250) and temporoparietal (P3b) components. Furthermore, an early frontal modulation differed between processing that bears on the face itself (interactive intrinsic context) and processing that bears on two objects at the same time (interactive extrinsic context); in the latter case, the background change led to an early modulation at frontal sites in the left hemisphere. These results are consistent with the idea that frontal effects reflect differences in the nature of the information involved in retrieval and post-retrieval processes. The left posterior repetition effect appears to be a manifestation of the retrieval of relevant contextual information that perturbs the recognition decision, whereas the right posterior repetition effect appears to reflect the outcome of retrieving the face as a whole. Finally, the results are in accordance with the hypothesis that the difference between recognition with and without source memory may lie in the strength of the relationship between the target and the contextual information to be retrieved. In essence, the balance of automatic and controlled processes in a given context depends on both task-related and target-related constraints.

  19. Neural correlates of the in-group memory advantage on the encoding and recognition of faces.

    PubMed

    Herzmann, Grit; Curran, Tim

    2013-01-01

    People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject's own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top

  20. Emotional face recognition in adolescent suicide attempters and adolescents engaging in non-suicidal self-injury.

    PubMed

    Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P

    2016-03-01

    Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the diagnostic assessment of nonverbal accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found on emotional face recognition between NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role

  1. Own- and other-race face identity recognition in children: the effects of pose and feature composition.

    PubMed

    Anzures, Gizelle; Kelly, David J; Pascalis, Olivier; Quinn, Paul C; Slater, Alan M; de Viviés, Xavier; Lee, Kang

    2014-02-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image processing. The current study also confirms the presence of an ORE in children as young as 5 years of age using a recognition paradigm that is sensitive to their developing cognitive abilities. In addition, the present findings show that with age, increasing experience with familiar classes of own-race faces and further lack of experience with unfamiliar classes of other-race faces serves to maintain the ORE between 5 and 10 years of age rather than exacerbate the effect. All age groups also showed a differential effect of stimulus facial pose in their recognition of the internal regions of own- and other-race faces. Own-race inner faces were remembered best when three-quarter poses were used during familiarization and frontal poses were used during the recognition test. In contrast, other-race inner faces were remembered best when frontal poses were used during familiarization and three-quarter poses were used during the recognition test. Thus, children encode and/or retrieve own- and other-race faces from memory in qualitatively different ways.

  2. Perceptual Organization as a Determinant of Visual Recognition Memory

    ERIC Educational Resources Information Center

    Wiseman, Sandor; Neisser, Ulric

    1974-01-01

    Ambiguous pictures that could be seen as faces or as meaningless patterns were the stimuli in two recognition-memory experiments. Recognition was far more accurate when the stimuli were seen as faces. (Editor)

  3. An Evolutionary Feature-Based Visual Attention Model Applied to Face Recognition

    NASA Astrophysics Data System (ADS)

    Vázquez, Roberto A.; Sossa, Humberto; Garro, Beatriz A.

    Visual attention is a powerful mechanism that enables perception to focus on a small subset of the information picked up by our eyes. It is directly related to the accuracy of an object categorization task. In this paper we adopt those biological hypotheses and propose an evolutionary visual attention model applied to the face recognition problem. The model is composed by three levels: the attentive level that determines where to look by means of a retinal ganglion network simulated using a network of bi-stable neurons and controlled by an evolutionary process; the preprocessing level that analyses and process the information from the retinal ganglion network; and the associative level that uses a neural network to associate the visual stimuli with the face of a particular person. To test the accuracy of the model a benchmark of faces is used.

  4. Fraudulent ID using face morphs: Experiments on human and automatic recognition

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people’s ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to ‘trained’ human viewers—i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security. PMID:28328928

  5. Fraudulent ID using face morphs: Experiments on human and automatic recognition.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people's ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to 'trained' human viewers-i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security.

  6. Through the eyes of the own-race bias: eye-tracking and pupillometry during face recognition.

    PubMed

    Wu, Esther Xiu Wen; Laeng, Bruno; Magnussen, Svein

    2012-01-01

    People are generally better at remembering faces of their own race than faces of a different race, and this effect is known as the own-race bias (ORB) effect. We used eye-tracking and pupillometry to investigate whether Caucasian and Asian face stimuli elicited different patterns of looking in Caucasian participants in a face-memory task. Consistent with the ORB effect, we found better recognition performance for own-race faces than other-race faces, and shorter response times. In addition, at encoding, eye movements and pupillary responses to Asian faces (i.e., the other race) were different from those to Caucasian faces (i.e., the own race). Processing of own-race faces was characterized by more active scanning, with a larger number of shorter fixations, and more frequent saccades. Moreover, pupillary diameters were larger when viewing other-race than own-race faces, suggesting a greater cognitive effort when encoding other-race faces.

  7. Functional connectivity differences in autism during face and car recognition: underconnectivity and atypical age-related changes.

    PubMed

    Lynn, Andrew C; Padmanabhan, Aarthi; Simmonds, Daniel; Foran, William; Hallquist, Michael N; Luna, Beatriz; O'Hearn, Kirsten

    2016-10-16

    Face recognition abilities improve between adolescence and adulthood over typical development (TD), but plateau in autism, leading to increasing face recognition deficits in autism later in life. Developmental differences between autism and TD may reflect changes between neural systems involved in the development of face encoding and recognition. Here, we focused on whole-brain connectivity with the fusiform face area (FFA), a well-established face-preferential brain region. Older children, adolescents, and adults with and without autism completed the Cambridge Face Memory Test, and a matched car memory test, during fMRI scanning. We then examined task-based functional connectivity between the FFA and the rest of the brain, comparing autism and TD groups during encoding and recognition of face and car stimuli. The autism group exhibited underconnectivity, relative to the TD group, between the FFA and frontal and primary visual cortices, independent of age. Underconnectivity with the medial and rostral lateral prefrontal cortex was face-specific during encoding and recognition, respectively. Conversely, underconnectivity with the L orbitofrontal cortex was evident for both face and car encoding. Atypical age-related changes in connectivity emerged between the FFA and the R temporoparietal junction, and R dorsal striatum for face stimuli only. Similar differences in age-related changes in autism emerged for FFA connectivity with the amygdala across both face and car recognition. Thus, underconnectivity and atypical development of functional connectivity may lead to a less optimal face-processing network in the context of increasing general and social cognitive deficits in autism.

  8. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.

  9. Composite multilobe descriptors for cross-spectral recognition of full and partial face

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.; Bourlai, Thirimachos

    2016-08-01

    Cross-spectral image matching is a challenging research problem motivated by various applications, including surveillance, security, and identity management in general. An example of this problem includes cross-spectral matching of active infrared (IR) or thermal IR face images against a dataset of visible light images. A summary of recent developments in the field of cross-spectral face recognition by the authors is presented. In particular, it describes the original form and two variants of a local operator named composite multilobe descriptor (CMLD) for facial feature extraction with the purpose of cross-spectral matching of near-IR, short-wave IR, mid-wave IR, and long-wave IR to a gallery of visible light images. The experiments demonstrate that the variants of CMLD outperform the original CMLD and other recently developed composite operators used for comparison. In addition to different IR spectra, various standoff distances from close-up (1.5 m) to intermediate (50 m) and long (106 m) are also investigated. Performance of CMLD I to III is evaluated for each of the three cases of distances. The newly developed operators, CMLD I to III, are further utilized to conduct a study on cross-spectral partial face recognition where different facial regions are compared in terms of the amount of useful information they contain for the purpose of conducting cross-spectral face recognition. The experimental results show that among three facial regions considered in the experiments the eye region is the most informative for all IR spectra at all standoff distances.

  10. The impact of beliefs about face recognition ability on memory retrieval processes in young and older adults.

    PubMed

    Humphries, Joyce E; Flowe, Heather D; Hall, Louise C; Williams, Louise C; Ryder, Hannah L

    2016-01-01

    This study examined whether beliefs about face recognition ability differentially influence memory retrieval in older compared to young adults. Participants evaluated their ability to recognise faces and were also given information about their ability to perceive and recognise faces. The information was ostensibly based on an objective measure of their ability, but in actuality, participants had been randomly assigned the information they received (high ability, low ability or no information control). Following this information, face recognition accuracy for a set of previously studied faces was measured using a remember-know memory paradigm. Older adults rated their ability to recognise faces as poorer compared to young adults. Additionally, negative information about face recognition ability improved only older adults' ability to recognise a previously seen face. Older adults were also found to engage in more familiarity-based than item-specific processing relative to young adults, but information about their face recognition ability did not affect face processing style. The role that older adults' memory beliefs have in the meta-cognitive strategies they employ is discussed.

  11. How a hat may affect 3-month-olds' recognition of a face: an eye-tracking study.

    PubMed

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants' face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants' ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants' face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants' ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants' attention, interfering with the recognition process and preventing the infants' preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

  12. Infrared face recognition based on intensity of local micropattern-weighted local binary pattern

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2011-07-01

    The traditional local binary pattern (LBP) histogram representation extracts local micropatterns and assigns the same weight to every micropattern. To account for the different contributions of local micropatterns to face recognition, this paper proposes a weighted LBP histogram based on Weber's law. First, inspired by the psychological Weber's law, the intensity of a local micropattern is defined as the ratio between two terms: the relative intensity differences of a central pixel against its neighbors, and the intensity of the central pixel itself. Second, taking the intensity of each local micropattern as its weight, the weighted LBP histogram is constructed with the defined weights. Finally, to make full use of spatial location information and reduce the complexity of recognition, partitioning and locality preserving projection are applied to obtain the final features. The proposed method is tested on our infrared face databases and yields recognition rates of 99.2% in the same-session situation and 96.4% in the elapsed-time situation, compared to 97.6% and 92.1% for the method based on the traditional LBP.
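
    One way to picture the weighted histogram is to let every pixel vote for its LBP code with a Weber-style weight instead of a unit count (a minimal sketch; the exact ratio used in the paper may differ, and the partitioning and locality preserving projection stages are not shown):

        import numpy as np
        from skimage.feature import local_binary_pattern

        def weber_weighted_lbp_histogram(image, n_points=8, radius=1):
            # LBP code per pixel, then a Weber-ratio weight (sum of neighbour differences
            # over the centre intensity) contributed to that code's bin.
            codes = local_binary_pattern(image, n_points, radius, method="default").astype(int)
            img = np.asarray(image, dtype=float)
            h, w = img.shape
            center = img[1:h - 1, 1:w - 1]
            diff_sum = np.zeros_like(center)
            for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
                diff_sum += img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - center
            weights = np.abs(diff_sum) / (center + 1e-6)
            hist = np.zeros(2 ** n_points)
            np.add.at(hist, codes[1:h - 1, 1:w - 1].ravel(), weights.ravel())
            return hist / (hist.sum() + 1e-12)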

  13. Illumination invariant face recognition and impostor rejection using different MINACE filter algorithms

    NASA Astrophysics Data System (ADS)

    Patnaik, Rohit; Casasent, David

    2005-03-01

    A face recognition system that functions in the presence of illumination variations is presented. It is based on the minimum noise and correlation energy (MINACE) filter. A separate MINACE filter is synthesized for each person using an automated filter-synthesis algorithm that uses a training set of illumination differences of that person and a validation set of a few faces of other persons to select the MINACE filter parameter c. The MINACE filter for each person is a combination of training images of only that person; no false-class training is done. Different formulations of the MINACE filter are examined, along with two correlation plane metrics: the correlation peak value and the peak-to-correlation plane energy ratio (PCER). Performance results for face verification and identification are presented using images from the CMU Pose, Illumination, and Expression (PIE) database. All training and test set images are registered to remove tilt bias and scale variations. To evaluate the face verification and identification systems, a set of impostor images (non-database faces) is used to obtain false alarm scores (PFA).
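
    Once a MINACE filter has been synthesized (the synthesis and the selection of the parameter c are outside this sketch), applying it and scoring the correlation plane is straightforward; the PCER form below is one common definition and is an assumption here:

        import numpy as np

        def correlation_plane(image, filter_freq):
            # Frequency-domain correlation with a pre-synthesized filter given in the Fourier domain.
            F = np.fft.fft2(image, s=filter_freq.shape)
            return np.abs(np.fft.ifft2(F * np.conj(filter_freq)))

        def pcer(plane):
            # Peak-to-correlation-plane-energy ratio: large for a true-class match, small for impostors.
            return plane.max() ** 2 / (np.square(plane).sum() + 1e-12)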

  14. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    PubMed

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias to detecting social visual cues of the defector face.

  15. Face recognition by exploring information jointly in space, scale and orientation.

    PubMed

    Lei, Zhen; Liao, Shengcai; Pietikäinen, Matti; Li, Stan Z

    2011-01-01

    Information jointly contained in the image space, scale, and orientation domains can provide rich clues that are not available in any of these domains individually. The position, spatial frequency, and orientation selectivity properties are believed to play an important role in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in the image space, scale, and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolving it with multiscale and multiorientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also across the different scale and orientation responses. In this way, information from different domains is explored to give a good face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on the FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over existing ones.
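
    The weighted histogram intersection used for matching such block-wise descriptors can be written compactly (a minimal sketch; the per-bin weights are assumed to come from the discriminant analysis mentioned above and are supplied by the caller):

        import numpy as np

        def histogram_intersection(h1, h2, weights=None):
            # Weighted histogram intersection similarity between two descriptors.
            inter = np.minimum(np.asarray(h1, float), np.asarray(h2, float))
            if weights is not None:
                inter = inter * weights
            return float(inter.sum())

        def classify(probe_hist, gallery_hists, gallery_labels, weights=None):
            # Assign the probe the label of the most similar gallery descriptor.
            sims = [histogram_intersection(probe_hist, g, weights) for g in gallery_hists]
            return gallery_labels[int(np.argmax(sims))]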

  16. Multiple scales combined principal component analysis deep learning network for face recognition

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Fan, Chunxiao; Ming, Yue

    2016-03-01

    It is well known that higher-level features can represent the abstract semantics of the original data. We propose a multiple-scales-combined deep learning network that learns a set of high-level feature representations through each stage of a convolutional neural network for face recognition, named the multiscaled principal component analysis (PCA) Network (MS-PCANet). There are two main differences between our model and traditional deep learning networks. On the one hand, we obtain pre-fixed filter kernels by learning the principal components of image patches using PCA, nonlinearly process the convolutional results using simple binary hashing, and pool them using a spatial pyramid pooling method. On the other hand, in our model the output features of several stages are fed to the classifier. The purpose of combining feature representations from multiple stages is to provide multiscaled features to the classifier, since the features in later stages are more global and invariant than those in earlier stages. Therefore, our MS-PCANet feature compactly encodes both holistic abstract information and local specific information. Extensive experimental results show that our MS-PCANet model can efficiently extract high-level feature representations and outperforms state-of-the-art face/expression recognition methods on multiple-modality benchmark face-related datasets.
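
    A single PCANet-style stage, as described above, might look like the following (the patch size, filter count, and the zero-threshold binarization are illustrative assumptions; spatial pyramid pooling and the multi-stage feature concatenation are omitted):

        import numpy as np
        from scipy.signal import fftconvolve

        def learn_pca_filters(images, patch=7, n_filters=8):
            # Convolution kernels taken as the leading principal components of
            # mean-removed image patches.
            patches = []
            for img in images:
                h, w = img.shape
                for y in range(0, h - patch + 1, patch):
                    for x in range(0, w - patch + 1, patch):
                        p = img[y:y + patch, x:x + patch].astype(float).ravel()
                        patches.append(p - p.mean())
            _, _, vt = np.linalg.svd(np.array(patches), full_matrices=False)
            return vt[:n_filters].reshape(n_filters, patch, patch)

        def pcanet_stage(image, filters):
            # Convolve, binarize each response, and pack the bits into one integer code map;
            # block histograms of this map form the stage's output feature.
            responses = [fftconvolve(image.astype(float), f, mode="same") for f in filters]
            code = np.zeros(image.shape, dtype=int)
            for i, r in enumerate(responses):
                code += (r > 0).astype(int) << i
            return code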

  17. Recognition disorders for famous faces and voices: a review of the literature and normative data of a new test battery.

    PubMed

    Quaranta, Davide; Piccininni, Chiara; Carlesimo, Giovanni Augusto; Luzzi, Simona; Marra, Camillo; Papagno, Costanza; Trojano, Luigi; Gainotti, Guido

    2016-03-01

    Several anatomo-clinical investigations have shown that familiar face recognition disorders not due to high-level perceptual defects are often observed in patients with lesions of the right anterior temporal lobe (ATL). The meaning of these findings is, however, controversial, because some authors claim that these patients show pure instances of modality-specific 'associative prosopagnosia', whereas other authors maintain that voice recognition is also impaired in these patients and that they have a 'multimodal person recognition disorder'. To clarify the nature of famous face recognition disorders in patients affected by right ATL lesions, it is therefore very important to verify with formal tests whether these patients are able to recognize others by voice, but a direct comparison between the two modalities is hindered by the fact that voice recognition is more difficult than face recognition. To circumvent this difficulty, we constructed a test battery in which subjects were requested to recognize the same persons (well-known at the national level) through their faces and voices, evaluating familiarity and identification processes. The present paper describes the 'Famous People Recognition Battery' and reports the normative data necessary to clarify the nature of person recognition disorders observed in patients affected by right ATL lesions.

  18. Neural Correlates of the In-Group Memory Advantage on the Encoding and Recognition of Faces

    PubMed Central

    Herzmann, Grit; Curran, Tim

    2013-01-01

    People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject’s own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top

  19. Paternal kin recognition and infant care in white-faced capuchins (Cebus capucinus).

    PubMed

    Sargeant, Elizabeth J; Wikberg, Eva C; Kawamura, Shoji; Jack, Katharine M; Fedigan, Linda M

    2016-06-01

    Evidence for paternal kin recognition and paternally biased behaviors is mixed among primates. We investigate whether infant handling behaviors exhibit paternal kin biases in wild white-faced capuchins monkeys (Cebus capucinus) by comparing interactions between infants and genetic sires, potential sires, siblings (full sibling, maternal, and paternal half-siblings) and unrelated handlers. We used a linear mixed model approach to analyze data collected on 21 focal infants from six groups in Sector Santa Rosa, Costa Rica. Our analyses suggest that the best predictor of adult and subadult male interactions with an infant is the male's dominance status, not his paternity status. We found that maternal siblings but not paternal siblings handled infants more than did unrelated individuals. We conclude that maternal but not paternal kinship influence patterns of infant handling in white-faced capuchins, regardless of whether or not they can recognize paternal kin. Am. J. Primatol. 78:659-668, 2016. © 2016 Wiley Periodicals, Inc.

  20. Upright or inverted, entire or exploded: right-hemispheric superiority in face recognition withstands multiple spatial manipulations.

    PubMed

    Prete, Giulia; Marzoli, Daniele; Tommasi, Luca

    2015-01-01

    Background. The ability to identify faces has been interpreted as a cerebral specialization based on the evolutionary importance of these social stimuli, and a number of studies have shown that this function is mainly lateralized in the right hemisphere. The aim of this study was to assess the right-hemispheric specialization in face recognition in unfamiliar circumstances. Methods. Using a divided visual field paradigm, we investigated hemispheric asymmetries in the matching of two subsequent faces, using two types of transformation hindering identity recognition, namely upside-down rotation and spatial "explosion" (female and male faces were fractured into parts so that their mutual spatial relations were left intact), as well as their combination. Results. We confirmed the right-hemispheric superiority in face processing. Moreover, we found a decrease of the identity recognition for more extreme "levels of explosion" and for faces presented upside-down (either as sample or target stimuli) than for faces presented upright, as well as an advantage in the matching of female compared to male faces. Discussion. We conclude that the right-hemispheric superiority for face processing is not an epiphenomenon of our expertise, because we are not often exposed to inverted and "exploded" faces, but rather a robust hemispheric lateralization. We speculate that these results could be attributable to the prevalence of right-handedness in humans and/or to early biases in social interactions.

  1. The rehabilitation of face recognition impairments: a critical review and future directions

    PubMed Central

    Bate, Sarah; Bennetts, Rachel J.

    2014-01-01

    While much research has investigated the neural and cognitive characteristics of face recognition impairments (prosopagnosia), much less work has examined their rehabilitation. In this paper, we present a critical analysis of the studies that have attempted to improve face-processing skills in acquired and developmental prosopagnosia, and place them in the context of the wider neurorehabilitation literature. First, we examine whether neuroplasticity within the typical face-processing system varies across the lifespan, in order to examine whether timing of intervention may be crucial. Second, we examine reports of interventions in acquired prosopagnosia, where training in compensatory strategies has had some success. Third, we examine reports of interventions in developmental prosopagnosia, where compensatory training in children and remedial training in adults have both been successful. However, the gains are somewhat limited—compensatory strategies have resulted in labored recognition techniques and limited generalization to untrained faces, and remedial techniques require longer periods of training and result in limited maintenance of gains. Critically, intervention suitability and outcome in both forms of the condition likely depends on a complex interaction of factors, including prosopagnosia severity, the precise functional locus of the impairment, and individual differences such as age. Finally, we discuss future directions in the rehabilitation of prosopagnosia, and the possibility of boosting the effects of cognitive training programmes by simultaneous administration of oxytocin or non-invasive brain stimulation. We conclude that future work using more systematic methods and larger participant groups is clearly required, and in the case of developmental prosopagnosia, there is an urgent need to develop early detection and remediation tools for children, in order to optimize intervention outcome. PMID:25100965

  2. The rehabilitation of face recognition impairments: a critical review and future directions.

    PubMed

    Bate, Sarah; Bennetts, Rachel J

    2014-01-01

    While much research has investigated the neural and cognitive characteristics of face recognition impairments (prosopagnosia), much less work has examined their rehabilitation. In this paper, we present a critical analysis of the studies that have attempted to improve face-processing skills in acquired and developmental prosopagnosia, and place them in the context of the wider neurorehabilitation literature. First, we examine whether neuroplasticity within the typical face-processing system varies across the lifespan, in order to examine whether timing of intervention may be crucial. Second, we examine reports of interventions in acquired prosopagnosia, where training in compensatory strategies has had some success. Third, we examine reports of interventions in developmental prosopagnosia, where compensatory training in children and remedial training in adults have both been successful. However, the gains are somewhat limited: compensatory strategies have resulted in labored recognition techniques and limited generalization to untrained faces, and remedial techniques require longer periods of training and result in limited maintenance of gains. Critically, intervention suitability and outcome in both forms of the condition likely depends on a complex interaction of factors, including prosopagnosia severity, the precise functional locus of the impairment, and individual differences such as age. Finally, we discuss future directions in the rehabilitation of prosopagnosia, and the possibility of boosting the effects of cognitive training programmes by simultaneous administration of oxytocin or non-invasive brain stimulation. We conclude that future work using more systematic methods and larger participant groups is clearly required, and in the case of developmental prosopagnosia, there is an urgent need to develop early detection and remediation tools for children, in order to optimize intervention outcome.

  3. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    ERIC Educational Resources Information Center

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  4. Recognition of Immaturity and Emotional Expressions in Blended Faces by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2008-01-01

    The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 &…

  5. Does Perceived Race Affect Discrimination and Recognition of Ambiguous-Race Faces? A Test of the Sociocognitive Hypothesis

    ERIC Educational Resources Information Center

    Rhodes, Gillian; Lie, Hanne C.; Ewing, Louise; Evangelista, Emma; Tanaka, James W.

    2010-01-01

    Discrimination and recognition are often poorer for other-race than own-race faces. These other-race effects (OREs) have traditionally been attributed to reduced perceptual expertise, resulting from more limited experience, with other-race faces. However, recent findings suggest that sociocognitive factors, such as reduced motivation to…

  6. Expression-invariant face recognition using depth and intensity dual-tree complex wavelet transform features

    NASA Astrophysics Data System (ADS)

    Ayatollahi, Fazael; Raie, Abolghasem A.; Hajati, Farshid

    2015-03-01

    A new multimodal expression-invariant face recognition method is proposed by extracting features of rigid and semirigid regions of the face which are less affected by facial expressions. Dual-tree complex wavelet transform is applied in one decomposition level to extract the desired feature from range and intensity images by transforming the regions into eight subimages, consisting of six band-pass subimages to represent face details and two low-pass subimages to represent face approximates. The support vector machine has been used to classify both feature fusion and score fusion modes. To test the algorithm, BU-3DFE and FRGC v2.0 datasets have been selected. The BU-3DFE dataset was tested by low intensity versus high intensity and high intensity versus low intensity strategies using all expressions in both training and testing stages in different levels. Findings include the best rank-1 identification rate of 99.8% and verification rate of 100% at a 0.1% false acceptance rate. The FRGC v2.0 was tested by the neutral versus non-neutral strategy, which applies images without expression in training and with expression in the testing stage, thereby achieving the best rank-1 identification rate of 93.5% and verification rate of 97.4% at a 0.1% false acceptance rate.

  7. Assessment of H.264 video compression on automated face recognition performance in surveillance and mobile video scenarios

    NASA Astrophysics Data System (ADS)

    Klare, Brendan; Burge, Mark

    2010-04-01

    We assess the impact of the H.264 video codec on the match performance of automated face recognition in surveillance and mobile video applications. A set of two hundred access control (90 pixel inter-pupilary distance) and distance surveillance (45 pixel inter-pupilary distance) videos taken under non-ideal imaging and facial recognition (e.g., pose, illumination, and expression) conditions were matched using two commercial face recognition engines in the studies. The first study evaluated automated face recognition performance on access control and distance surveillance videos at CIF and VGA resolutions using the H.264 baseline profile at nine bitrates rates ranging from 8kbs to 2048kbs. In our experiments, video signals were able to be compressed up to 128kbs before a significant drop face recognition performance occurred. The second study evaluated automated face recognition on mobile devices at QCIF, iPhone, and Android resolutions for each of the H.264 PDA profiles. Rank one match performance, cumulative match scores, and failure to enroll rates are reported.

  8. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
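
    The no-reference alignment index based on target colors in HSV space might be approximated roughly as below (a loose sketch: the hue range standing in for the 'target colors', and the simple pixel fraction in place of a full cumulative probability, are assumptions of this illustration, not the paper's definition):

        import numpy as np
        from skimage.color import rgb2hsv

        def target_color_fraction(rgb_image, hue_range=(0.0, 0.15)):
            # Fraction of pixels whose hue falls inside an assumed "target" range (e.g. skin
            # tones); a higher fraction is read as a better-aligned sRGB rendering.
            rgb = rgb_image.astype(float) / 255.0 if rgb_image.dtype == np.uint8 else rgb_image
            hue = rgb2hsv(rgb)[..., 0]
            return float(np.mean((hue >= hue_range[0]) & (hue <= hue_range[1])))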

  9. Image disparity in cross-spectral face recognition: mitigating camera and atmospheric effects

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.; Li, Xin

    2016-05-01

    Matching facial images acquired in different electromagnetic spectral bands remains a challenge. An example of this type of comparison is matching active or passive infrared (IR) against a gallery of visible face images. When combined with cross-distance, this problem becomes even more challenging due to deteriorated quality of the IR data. As an example, we consider a scenario where visible light images are acquired at a short standoff distance while IR images are long range data. To address the difference in image quality due to atmospheric and camera effects, typical degrading factors observed in long range data, we propose two approaches that coordinate the image quality of visible and IR face images. The first approach involves Gaussian-based smoothing functions applied to images acquired at a short distance (visible light images in the case we analyze). The second approach involves denoising and enhancement applied to low quality IR face images. A quality measure called the Adaptive Sharpness Measure, an improvement of the well-known Tenengrad method, is utilized as guidance for the quality parity process. For the recognition algorithm, a composite operator combining Gabor filters, Local Binary Patterns (LBP), generalized LBP, and the Weber Local Descriptor (WLD) is used. The composite operator encodes both the magnitude and phase responses of the Gabor filters. The combination of LBP and WLD utilizes both the orientation and intensity information of edges. Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), and different long standoff distances are considered. The experimental results show that in all cases the proposed technique of image quality parity (both approaches) benefits the final recognition performance.
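
    The Tenengrad baseline behind the Adaptive Sharpness Measure is easy to state (only the classic form is sketched here, under the assumption of a simple threshold; the paper's adaptive improvement is not reproduced):

        import numpy as np
        from scipy.ndimage import sobel

        def tenengrad(image, threshold=0.0):
            # Classic Tenengrad sharpness: mean squared Sobel gradient magnitude,
            # optionally ignoring weak gradients below a threshold.
            img = np.asarray(image, dtype=float)
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            mag2 = gx ** 2 + gy ** 2
            mag2[mag2 < threshold] = 0.0
            return float(mag2.mean())

    A score of this kind could, for example, guide how strongly the short-range visible images are smoothed so that their sharpness approaches that of the long-range IR data.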

  10. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometrics systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, the small distinction of robustness and discriminative power between the classification results under the assumption of uniquely distributed templates and the ones under the assumption of Gaussian distributed templates is shown in our tests.
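
    The requirement that templates become robust, roughly uniformly distributed binary vectors can be illustrated with a median-threshold binarization and Hamming comparison (an assumption-level sketch; median thresholding is only one possible choice, and the error-correction coding and cryptographic protection layers described above are not shown):

        import numpy as np

        def binarize_templates(enroll_templates):
            # Threshold each dimension at the median of the enrollment set, so each bit is
            # close to uniformly distributed over the population.
            thresholds = np.median(enroll_templates, axis=0)
            return (enroll_templates > thresholds).astype(np.uint8), thresholds

        def hamming_distance(bits_a, bits_b):
            # Fractional Hamming distance between two binary template vectors.
            return np.count_nonzero(bits_a != bits_b) / bits_a.size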

  11. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    PubMed

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-09-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.

  12. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately to Gabor features extracted from the 2D image and the depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by the corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
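
    The decision-level fusion described above reduces to a weighted sum of the two per-identity distance vectors (a sketch; the 0.6/0.4 weights below are placeholders, not the coefficients used in the paper):

        import numpy as np

        def fused_identity(dist_cldp_gabor, dist_cldp_depth, w_gabor=0.6, w_depth=0.4):
            # Weighted sum of the two modalities' distances; the probe takes the identity
            # with the smallest fused distance.
            total = w_gabor * np.asarray(dist_cldp_gabor) + w_depth * np.asarray(dist_cldp_depth)
            return int(np.argmin(total))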

  13. Galactose uncovers face recognition and mental images in congenital prosopagnosia: The first case report

    PubMed Central

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-01-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted. PMID:24164936

  14. Efficient face recognition using local derivative pattern and shifted phase-encoded fringe-adjusted joint transform correlation

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram K.; Alam, Mohammad S.; Chowdhury, Suparna

    2016-04-01

    An improved shifted phase-encoded fringe-adjusted joint transform correlation technique is proposed in this paper for face recognition that can accommodate the detrimental effects of noise, illumination, and 3D distortions such as expression and rotation variations. The technique applies a third-order local derivative pattern operator (LDP3) followed by a shifted phase-encoded fringe-adjusted joint transform correlation (SPFJTC) operation. The local derivative pattern operator ensures better facial feature extraction in a variable environment, while the SPFJTC yields robust correlation output for the desired signals. The performance of the proposed method is evaluated using the Yale Face Database, Yale Face Database B, and the Georgia Institute of Technology Face Database. The technique yields a better face recognition rate than alternative JTC-based techniques.
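
    The abstract does not spell out the LDP3 operator, so the sketch below only conveys the flavor of a local derivative pattern: the image is differentiated along one direction, and each pixel's code records whether that derivative changes sign between the pixel and each of its eight neighbors. This is a simplified second-order, 0-degree stand-in, not the authors' third-order operator, and the function name is hypothetical.

    import numpy as np

    def ldp_second_order_0deg(img):
        img = np.asarray(img, dtype=np.float64)
        # First-order derivative along the 0-degree direction: d(x, y) = I(x, y) - I(x, y + 1).
        d = img[:, :-1] - img[:, 1:]
        center = d[1:-1, 1:-1]
        codes = np.zeros(center.shape, dtype=np.int32)
        # Eight neighbors of each derivative pixel, clockwise from top-left.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(offsets):
            neighbor = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
            # The bit is set when the derivative flips sign between center and neighbor.
            codes += (center * neighbor < 0).astype(np.int32) << bit
        return codes

    A face descriptor would then typically be built from spatial histograms of these codes before the correlation stage.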

  15. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    PubMed

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the FEAST (Facial Expressive Action Stimulus Test), developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  16. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    PubMed Central

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the FEAST (Facial Expressive Action Stimulus Test), developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  17. Target-context unitization effect on the familiarity-related FN400: a face recognition exclusion task.

    PubMed

    Guillaume, Fabrice; Etienne, Yann

    2015-03-01

    Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition.

  18. Characterizing the spatio-temporal dynamics of the neural events occurring prior to and up to overt recognition of famous faces.

    PubMed

    Jemel, Boutheina; Schuller, Anne-Marie; Goffaux, Valérie

    2010-10-01

    Although it is generally acknowledged that familiar face recognition is fast, mandatory, and proceeds outside conscious control, it is still unclear whether the processes leading to familiar face recognition occur in a linear (i.e., gradual) or a nonlinear (i.e., all-or-none) manner. To test these two alternative accounts, we recorded scalp ERPs while participants indicated whether they recognized as familiar the faces of famous and unfamiliar persons gradually revealed in a descending sequence of frames, from the noisiest to the least noisy. This presentation procedure allowed us to characterize the changes in scalp ERP responses occurring prior to and up to overt recognition. Our main finding is that both gradual and all-or-none processes appear to be involved in overt recognition of familiar faces. Although the N170 and N250 face-sensitive responses displayed an abrupt activity change at the moment of overt recognition of famous faces, later ERPs encompassing the N400 and the late positive component exhibited an incremental increase in amplitude as the point of recognition approached. In addition, famous faces that were not overtly recognized one trial before recognition elicited larger ERP amplitudes than unfamiliar faces, probably reflecting a covert recognition process. Overall, these findings provide evidence that recognition of familiar faces involves spatio-temporally complex neural processes whose activity patterns change differentially as a function of recognition state.

  19. A study of fuzzy logic ensemble system performance on face recognition problem

    NASA Astrophysics Data System (ADS)

    Polyakova, A.; Lipinskiy, L.

    2017-02-01

    Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of data mining (DM) techniques is a set of models, each of which can solve the problem on its own, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach emphasizes the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine, and decision trees. These algorithms and their ensemble were tested on face recognition problems. Principal component analysis (PCA) is used for feature selection.
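
    The aggregation in the paper is fuzzy-logic based; the sketch below only illustrates the overall pipeline (PCA features feeding an ensemble of a neural network, an SVM, and a decision tree), with scikit-learn soft voting standing in for the fuzzy combination. Dataset loading is omitted: X is assumed to hold flattened face images and y the identity labels, and all hyperparameters are placeholders.

    from sklearn.decomposition import PCA
    from sklearn.ensemble import VotingClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    def build_face_ensemble(n_components=50):
        # PCA for feature selection, followed by a three-member ensemble.
        members = [
            ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
            ("svm", SVC(kernel="rbf", probability=True)),
            ("tree", DecisionTreeClassifier(max_depth=10)),
        ]
        return make_pipeline(
            PCA(n_components=n_components, whiten=True),
            VotingClassifier(estimators=members, voting="soft"),
        )

    # Usage sketch (X: flattened face images, y: identity labels):
    # model = build_face_ensemble().fit(X_train, y_train)
    # print("accuracy:", model.score(X_test, y_test))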

  20. Real-time robust face recognition using weight-incorporated LBP

    NASA Astrophysics Data System (ADS)

    Jeng, Ren-He; Chen, Wen-Shiung; Hsieh, Lili

    2016-07-01

    In this paper, a new texture descriptor for image representation, called the weight-incorporated local binary pattern (WiLBP), is developed; it selects the weights assigned to the inputs and outputs of local binary patterns so as to determine the contribution of each feature. Using averaged gradient information, the principal components of a covariance matrix are derived to obtain adjusted principal components of a maximum-variance matrix, termed quantized eigen-analysis (QEA). The QEA matrix is a weight matrix used to adjust the contribution of comparisons of pixel intensities. To evaluate the performance of the WiLBP, a series of experiments was conducted on several popular face databases. The misclassification error obtained with QEA is lower than that of PCA across most trials. The experimental results also show that the WiLBP is a fast and robust method for individual recognition and gender classification applications.
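
    The exact QEA weighting is not given in the abstract, so the sketch below only conveys the underlying idea: an LBP-style code in which each center-versus-neighbor comparison contributes its own weight rather than a fixed power of two. The function name and default weights are illustrative; with the power-of-two defaults the code reduces to the ordinary LBP operator.

    import numpy as np

    def weighted_lbp(img, weights=None):
        img = np.asarray(img, dtype=np.float64)
        if weights is None:
            weights = 2.0 ** np.arange(8)   # plain LBP as the default
        center = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(center)
        for w, (dy, dx) in zip(weights, offsets):
            neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                           1 + dx:img.shape[1] - 1 + dx]
            # Each comparison adds its own weight instead of a fixed 2**bit.
            code += w * (neighbor >= center)
        return code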

  1. The functional correlates of face perception and recognition of emotional facial expressions as evidenced by fMRI.

    PubMed

    Jehna, M; Neuper, C; Ischebeck, A; Loitfelder, M; Ropele, S; Langkammer, C; Ebner, F; Fuchs, S; Schmidt, R; Fazekas, F; Enzinger, C

    2011-06-01

    Recognition and processing of emotional facial expressions are crucial for social behavior and employ higher-order cognitive and visual working processes. In neuropsychiatric disorders, impaired emotion recognition has most frequently concerned three specific emotions, i.e., anger, fear, and disgust. As incorrect processing of (neutral) facial stimuli per se might also underlie deficits in the recognition of emotional facial expressions, we aimed to assess all these aspects in one experiment. We therefore report here a functional magnetic resonance imaging (fMRI) paradigm for parallel assessment of the neural correlates of both the recognition of neutral faces and the three clinically most relevant emotions, for future use in patients with neuropsychiatric disorders. The fMRI analyses were expanded through comparisons of the emotional conditions with each other. The differential insights resulting from these two analysis strategies are compared and discussed. Thirty healthy participants (21 F/9 M; age 36.3 ± 14.3, 17-66 years) underwent fMRI and behavioral testing for non-emotional and emotional face recognition. Recognition of neutral faces elicited activation in the fusiform gyri. Processing angry faces led to activation in the left middle and superior frontal gyri and the anterior cingulate cortex. There was considerable heterogeneity regarding the fear versus neutral contrast, resulting in null effects for this contrast. Upon recognition of disgust, activation was noted in bilateral occipital regions, the fronto-orbital cortex, and the insula. Analyzing contrasts between the emotional conditions yielded results for the separate emotional network patterns similar to those obtained by contrasting against the reference conditions. We demonstrate here that our paradigm reproduces single aspects of separate previous studies across a cohort of healthy subjects, irrespective of age. Our approach might prove useful in future studies of patients with neurologic disorders with potential effect on emotion

  2. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    PubMed

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory.

  3. Multilayer surface albedo for face recognition with reference images in bad lighting conditions.

    PubMed

    Lai, Zhao-Rong; Dai, Dao-Qing; Ren, Chuan-Xian; Huang, Ke-Kun

    2014-11-01

    In this paper, we propose a multilayer surface albedo (MLSA) model to tackle face recognition in bad lighting conditions, especially when the reference images themselves are captured in bad lighting. Some previous research concludes that illumination variations mainly lie in the large-scale features of an image and therefore extracts small-scale features in the surface albedo (or surface texture). However, this surface albedo is not robust enough: it still contains some detrimental sharp features. To improve the robustness of the surface albedo, MLSA further decomposes it into a linear sum of several detail layers, so that features of different scales are separated and represented more specifically. The layers are then adjusted by separate weights, which are global parameters selected only once. A criterion function is developed to select these layer weights using an independent training set. MLSA is effective not only for controlled illumination variations but also for uncontrolled ones, even when mixed with other complicated variations (expression, pose, occlusion, and so on). Extensive experiments on four benchmark data sets show that MLSA yields a good receiver operating characteristic curve and statistical discriminating capability. The refined albedo improves recognition performance, especially with reference images taken in bad lighting conditions.
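
    The abstract does not state how the albedo is split into detail layers, so the sketch below uses a simple difference-of-Gaussians decomposition only to convey the recombination-with-weights idea; the sigmas and layer weights are placeholders rather than values chosen by the paper's criterion function.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def layered_recombination(albedo, sigmas=(1.0, 2.0, 4.0), weights=(1.0, 0.6, 0.3)):
        albedo = np.asarray(albedo, dtype=np.float64)
        layers, previous = [], albedo
        for sigma in sigmas:
            blurred = gaussian_filter(albedo, sigma)
            layers.append(previous - blurred)   # detail lost between successive scales
            previous = blurred
        result = previous                       # coarsest residual
        for w, layer in zip(weights, layers):
            result = result + w * layer         # reweight each detail layer
        return result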

  4. Emotional face recognition deficits and medication effects in pre-manifest through stage-II Huntington's disease.

    PubMed

    Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C

    2013-05-15

    Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, and within the early HD sample, we tested for differences on emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significant (p<0.05) impaired recognition, compared to controls, on fearful, angry and surprised faces; whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD.

  5. Simulation and experiment research of face recognition with modified multi-method morphological correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Xuping, Zhang

    2007-03-01

    A morphological definition of the similarity degree of gray-scale images and a general definition of morphological correlation (GMC) are proposed. Hardware and software designs for a compact joint transform correlator are presented in order to implement GMC. Two modified general morphological correlation algorithms are proposed, in which the gray-scale image is decomposed into a set of binary image slices by a chosen decomposition method. In the first algorithm, the edge of each binary joint image slice is detected, the adjustability of the edge width is investigated, and the joint power spectra of the edges are summed. In the second algorithm, either the joint power spectrum of each pair is binarized or thinned and then summed, or the summation of the joint power spectra of the pairs is binarized or thinned. Computer-simulation results and real face image recognition results indicate that the modified algorithms improve the discrimination capability for gray-scale face images of high similarity.
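
    One common way to obtain such binary image slices is threshold decomposition; the abstract does not say which decomposition the authors use, so the sketch below is indicative only.

    import numpy as np

    def threshold_decompose(img, levels=8):
        # Decompose a gray-scale image into a stack of binary slices; summing
        # the slices along the first axis recovers a quantized version of the image.
        img = np.asarray(img, dtype=np.float64)
        thresholds = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
        return np.stack([(img >= t).astype(np.uint8) for t in thresholds])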

  6. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as faces. It uses training samples from all classes to represent a test sample and assigns a class label to the test sample based on the representation residuals. However, the method still suffers from the problem that a limited number of training samples reduces classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC) that exploits novel virtual images and achieves high classification accuracy. The procedure for producing the virtual images is very simple, but their use brings a surprising performance improvement, as in some cases the virtual images sufficiently capture the features of the original face images. Extensive experimental results demonstrate that the proposed method effectively improves classification accuracy. This is mainly attributed to the integration of collaborative representation with the proposed feature-information-dominated virtual images.
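
    For orientation, the sketch below pairs a plain collaborative representation classifier with one simple kind of virtual image (horizontal mirrors of the training faces). The paper's virtual-image construction is not specified in the abstract, so the mirroring, the regularization value, and the function name are all assumptions.

    import numpy as np

    def crc_with_mirrored_samples(train_imgs, train_labels, probe_img, lam=0.01):
        # train_imgs: (n, h, w) array of gallery faces; probe_img: (h, w) array.
        train_imgs = np.asarray(train_imgs, dtype=np.float64)
        mirrored = train_imgs[:, :, ::-1]                    # virtual images
        samples = np.concatenate([train_imgs, mirrored], axis=0)
        labels = np.concatenate([np.asarray(train_labels), np.asarray(train_labels)])
        X = samples.reshape(len(samples), -1).T              # columns are samples
        y = np.asarray(probe_img, dtype=np.float64).ravel()
        # Collaborative representation: ridge-regularized least squares over all classes.
        alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        best_label, best_residual = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
            if residual < best_residual:
                best_label, best_residual = c, residual
        return best_label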

  7. Blur and illumination robust face recognition via set-theoretic characterization.

    PubMed

    Vageeswaran, Priyanka; Mitra, Kaushik; Chellappa, Rama

    2013-04-01

    We address the problem of unconstrained face recognition from remotely acquired images. The main factors that make this problem challenging are image degradation due to blur, and appearance variations due to illumination and pose. In this paper, we address the